Using Machine Learning Hardware to Solve Linear Partial Differential Equations with Finite Difference Methods

DOI: 10.1007/s10766-025-00791-6
Publication Date: 2025-03-04
ABSTRACT
This study explores the potential of utilizing hardware built for Machine Learning (ML) tasks as a platform for solving linear Partial Differential Equations via numerical methods. We examine the feasibility, benefits, and obstacles associated with this approach. Given an Initial Boundary Value Problem (IBVP) and a finite difference method, we directly compute the stencil coefficients and assign them to the kernel of a convolution layer, a common component used in ML. The convolution layer's output can be applied iteratively in a stencil loop to construct the solution of the IBVP. We describe this stencil loop as a TensorFlow (TF) program and use a Google Cloud instance to verify that it can target ML hardware and to profile its behavior and performance. We show that such a solver can be implemented in TF, opening opportunities to exploit the computational power of ML accelerators for numerics and simulations. Furthermore, we find that the primary issues in such implementations are under-utilization of the hardware and its low arithmetic precision. We further identify data movement and boundary condition handling as potential future bottlenecks, underscoring the need for improvements in the TF backend to optimize such computational patterns. Addressing these challenges could pave the way for broader applications of ML hardware in numerical computing and simulations.
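The following is a minimal sketch (not the authors' code) of the approach described in the abstract: the stencil coefficients of an explicit finite difference scheme are placed in the kernel of a convolution layer, and the solution is advanced by applying that convolution in a loop. It assumes a 2D heat equation u_t = alpha * (u_xx + u_yy) with homogeneous Dirichlet boundaries and a standard FTCS 5-point stencil; the grid size, diffusivity, and time step are illustrative choices, not values from the study.

import tensorflow as tf

N = 128                       # interior grid points per dimension (assumed)
alpha = 1.0                   # diffusivity (assumed)
dx = 1.0 / (N + 1)
dt = 0.2 * dx * dx / alpha    # satisfies the explicit stability limit dt <= dx^2 / (4*alpha)
steps = 500

# Explicit update: u_new = u + r * Laplacian(u) with r = alpha*dt/dx^2.
# Folding the identity term into the stencil yields a single 3x3 convolution kernel.
r = alpha * dt / (dx * dx)
kernel = tf.constant([[0.0,           r,  0.0],
                      [r,   1.0 - 4.0 * r, r],
                      [0.0,           r,  0.0]], dtype=tf.float32)
kernel = tf.reshape(kernel, [3, 3, 1, 1])   # HWIO layout expected by tf.nn.conv2d

# Initial condition: a Gaussian bump centered in the unit square (assumed).
x = tf.linspace(0.0, 1.0, N)
X, Y = tf.meshgrid(x, x)
u0 = tf.exp(-100.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))

@tf.function
def stencil_loop(u, n_steps):
    # NHWC layout: batch = 1, channels = 1.
    u = tf.reshape(u, [1, N, N, 1])
    for _ in tf.range(n_steps):
        # Zero-padding re-imposes the homogeneous Dirichlet boundary each step,
        # then one convolution applies the finite difference stencil to the grid.
        u_pad = tf.pad(u, [[0, 0], [1, 1], [1, 1], [0, 0]])
        u = tf.nn.conv2d(u_pad, kernel, strides=1, padding="VALID")
    return tf.reshape(u, [N, N])

u_final = stencil_loop(u0, steps)
print("max temperature after", steps, "steps:", float(tf.reduce_max(u_final)))

Because the whole loop is traced as a single TF graph, the same program can be dispatched to an ML accelerator; the kernel values depend only on the chosen scheme and grid spacing, so higher-order stencils would simply widen the convolution kernel.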