Minimization of high computational cost in data preprocessing and modeling using MPI4Py

KEYWORDS
Overfitting; Data pre-processing
DOI: 10.1016/j.mlwa.2023.100483
Publication Date: 2023-07-17
ABSTRACT
Data preprocessing is a fundamental stage in deep learning modeling and serves as the cornerstone of reliable data analytics. These models require significant amounts of training data to be effective, and small datasets often lead to overfitting and poor performance on larger datasets. One solution to this problem is the parallelization of modeling, which allows the model to fit more effectively and yields higher accuracy on large data sets overall. In this research, we developed a novel approach that effectively deployed parallel-computing tools such as MPI and MPI4Py to handle these processes. As a case study, the technique was applied to COVID-19 data from the state of Tennessee, USA. Finally, the effectiveness of our approach was demonstrated by comparing it with existing methods that do not use concepts such as MPI4Py. Our results demonstrate a promising outcome for this deployment in minimizing high computational cost.
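The abstract does not include code, so the following is only a minimal sketch of the general MPI4Py data-parallel preprocessing pattern it refers to: the root process loads and partitions the dataset, shared statistics are broadcast, each process preprocesses its own partition in parallel, and the results are gathered back. The file name covid_tn.csv and the min-max scaling step are hypothetical placeholders, not details taken from the study.

# Minimal data-parallel preprocessing sketch with mpi4py.
# Run with, e.g.: mpiexec -n 4 python preprocess_mpi.py
# The input file and the scaling step are illustrative assumptions.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Root process loads the full dataset, splits it into one chunk
    # per MPI process, and computes global column statistics.
    data = np.loadtxt("covid_tn.csv", delimiter=",", skiprows=1)
    chunks = np.array_split(data, size, axis=0)
    col_min = data.min(axis=0)
    col_max = data.max(axis=0)
else:
    chunks = None
    col_min = None
    col_max = None

# Distribute one chunk to each process and broadcast the statistics.
local = comm.scatter(chunks, root=0)
col_min = comm.bcast(col_min, root=0)
col_max = comm.bcast(col_max, root=0)

# Each process scales its own chunk independently (min-max scaling),
# guarding against constant columns to avoid division by zero.
span = np.where(col_max > col_min, col_max - col_min, 1.0)
local_scaled = (local - col_min) / span

# Gather the preprocessed chunks back on the root process.
parts = comm.gather(local_scaled, root=0)
if rank == 0:
    preprocessed = np.vstack(parts)
    print("Preprocessed shape:", preprocessed.shape)

Because each chunk is processed independently, adding MPI processes reduces the per-process workload; only the load, broadcast, and gather steps remain serial in this sketch.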