William Gropp

ORCID: 0000-0003-2905-3029
Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Interconnection Networks and Systems
  • Embedded Systems Design Techniques
  • Distributed systems and fault tolerance
  • Matrix Theory and Algorithms
  • Advanced Numerical Methods in Computational Mathematics
  • Cloud Computing and Resource Management
  • Scientific Computing and Data Management
  • Numerical methods for differential equations
  • Electromagnetic Scattering and Analysis
  • Computational Fluid Dynamics and Aerodynamics
  • Formal Methods in Verification
  • Numerical Methods and Algorithms
  • Computational Physics and Python Applications
  • Real-Time Systems Scheduling
  • Electromagnetic Simulation and Numerical Methods
  • Stochastic Gradient Optimization Techniques
  • Network Traffic and Congestion Control
  • Meteorological Phenomena and Simulations
  • Algorithms and Data Compression
  • Numerical methods in engineering
  • Model Reduction and Neural Networks
  • Software System Performance and Reliability

University of Illinois Urbana-Champaign
2015-2024

Argonne National Laboratory
2009-2024

Yale University
1987-2023

IEEE Computer Society
2018-2023

Stanford University
2023

Machine Science
2023

UCLouvain
2023

UC Irvine Health
2023

Hôpital Saint-Julien
2022

Institute of Electrical and Electronics Engineers
2019-2022

10.1016/0898-1221(95)90199-x article EN Computers & Mathematics with Applications 1995-11-01

We describe our work on improving the performance of collective communication operations in MPICH for clusters connected by switched networks. For each operation, we use multiple algorithms depending on the message size, with the goal of minimizing latency for short messages and maximizing bandwidth for long messages. Although we have implemented new algorithms for all MPI (Message Passing Interface) collective operations, because of limited space we describe only allgather, broadcast, all-to-all, reduce-scatter, reduce, and allreduce. Performance results...
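The size-dependent algorithm selection described above can be illustrated with one of the bandwidth-friendly algorithms typically chosen for long messages. The following is a pure-Python simulation of a ring allgather, a sketch for illustration only, not MPICH's actual implementation:

```python
def ring_allgather(blocks):
    """Simulate the ring allgather algorithm on p processes.

    blocks[i] is process i's local contribution. In each of p-1 steps,
    every process forwards the block it most recently received to its
    right neighbor; afterwards every process holds all p blocks in order.
    """
    p = len(blocks)
    # result[i] is process i's receive buffer; it starts with its own block.
    result = [[None] * p for _ in range(p)]
    for i in range(p):
        result[i][i] = blocks[i]
    for step in range(p - 1):
        for i in range(p):
            # At this step, process i sends block (i - step) mod p
            # to its right neighbor (i + 1) mod p.
            src = (i - step) % p
            result[(i + 1) % p][src] = blocks[src]
    return result

# Every process ends up with the full, ordered set of blocks.
out = ring_allgather(["a", "b", "c", "d"])
assert all(row == ["a", "b", "c", "d"] for row in out)
```

The ring sends the largest possible messages at each step, which suits long messages; for short messages an implementation would instead pick a latency-minimizing algorithm (e.g. recursive doubling), switching on a message-size threshold as the abstract describes.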

10.1177/1094342005051521 article EN The International Journal of High Performance Computing Applications 2005-02-01

Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and integration of the technologies necessary to make them work together smoothly and efficiently, both within...

10.1177/1094342010391989 article EN The International Journal of High Performance Computing Applications 2011-01-06

10.1016/s0898-1221(00)90209-8 article EN Computers & Mathematics with Applications 2000-07-01

10.5860/choice.35-1575 article EN Choice Reviews Online 1997-11-01

The I/O access patterns of parallel programs often consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many distinct, small requests, however, the I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access a noncontiguous data set with a single function call. This feature provides implementations the opportunity to optimize data access. We describe how our MPI-IO implementation, ROMIO, delivers high performance in the presence of noncontiguous requests. We explain in detail the two key optimizations...
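One of ROMIO's documented optimizations is data sieving: instead of issuing many small reads, the library reads a single contiguous extent covering all the requested pieces and extracts them in memory. A minimal sketch of the idea (illustrative Python, not ROMIO's C implementation; `data_sieve_read` is a hypothetical helper):

```python
import io

def data_sieve_read(f, requests):
    """Service many small (offset, length) reads with one large read.

    f: a seekable binary file object.
    requests: list of (offset, length) pairs, possibly noncontiguous.
    Returns the requested pieces, in request order.
    """
    if not requests:
        return []
    start = min(off for off, _ in requests)
    end = max(off + length for off, length in requests)
    f.seek(start)
    buf = f.read(end - start)  # one contiguous read: the "sieve"
    # Extract each requested piece from the in-memory buffer.
    return [buf[off - start: off - start + length] for off, length in requests]

f = io.BytesIO(b"abcdefghij")
pieces = data_sieve_read(f, [(0, 2), (4, 2), (8, 2)])
assert pieces == [b"ab", b"ef", b"ij"]
```

The trade-off is reading some unneeded "hole" bytes in exchange for far fewer I/O operations, which wins whenever per-request overhead dominates, exactly the small-noncontiguous-request regime the abstract describes.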

10.1109/fmpc.1999.750599 article EN 1999-01-01

I. Parallelism
  1. Introduction
  2. Parallel Computer Architectures
  3. Parallel Programming Considerations
II. Applications
  4. General Application Issues
  5. Parallel Computing in CFD
  6. Parallel Computing in Environment and Energy
  7. Parallel Computational Chemistry
  8. Application Overviews
III. Software Technologies
  9. Software Technologies
  10. Message Passing and Threads
  11. Parallel I/O
  12. Languages and Compilers
  13. Parallel Object-Oriented Libraries
  14. Problem-Solving Environments
  15. Tools for Performance Tuning and Debugging
  16. The 2-D Poisson Problem
IV. Enabling Technologies and Algorithms
  17. ...

10.5860/choice.41-0348 article EN Choice Reviews Online 2003-09-01

Dataset storage, exchange, and access play a critical role in scientific applications. For such purposes netCDF serves as a portable, efficient file format and programming interface, which is popular in numerous application domains. However, the original interface does not provide an efficient mechanism for parallel data storage and access. In this work, we present a new parallel interface for writing and reading netCDF datasets. This interface is derived with minimal changes from the serial interface but defines semantics tailored for high performance. The underlying I/O...
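In such a parallel interface, each process writes a disjoint part of a shared dataset, so every rank must compute where its share starts. The offset arithmetic behind a simple contiguous block decomposition can be sketched as follows (`block_partition` is a hypothetical helper for illustration, not part of the netCDF or PnetCDF API):

```python
def block_partition(n, nprocs, rank):
    """Return (start, count) for this rank's share of n records,
    giving any remainder one extra record per low-numbered rank.
    The shares tile [0, n) with no gaps or overlaps."""
    base, extra = divmod(n, nprocs)
    count = base + (1 if rank < extra else 0)
    start = rank * base + min(rank, extra)
    return start, count

# 10 records over 4 processes: ranks 0 and 1 get 3, ranks 2 and 3 get 2.
parts = [block_partition(10, 4, r) for r in range(4)]
assert parts == [(0, 3), (3, 3), (6, 2), (8, 2)]
assert sum(c for _, c in parts) == 10
```

Each rank would then pass its `(start, count)` pair to the parallel write call, letting the underlying I/O layer service all ranks' disjoint accesses efficiently.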

10.1145/1048935.1050189 article EN 2003-11-15

Over the past few years resilience has become a major issue for high-performance computing (HPC) systems, in particular in the perspective of large petascale systems and future exascale systems. These systems will typically gather from half a million to several million central processing unit (CPU) cores running up to a billion threads. From the current knowledge and observations of existing large systems, it is anticipated that such systems will experience various kinds of faults many times per day. It is also anticipated that the current approach to resilience, which relies on automatic or...

10.1177/1094342009347767 article EN The International Journal of High Performance Computing Applications 2009-09-17

We consider multiphysics applications from algorithmic and architectural perspectives, where “algorithmic” includes both mathematical analysis and computational complexity, and “architectural” includes both software and hardware environments. Many diverse applications can be reduced, en route to their simulation, to a common algebraic coupling paradigm. Mathematical analysis of coupling in this form is not always practical for realistic applications, but the model problems representative of applications discussed herein provide insight. A variety of frameworks have been...
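The "common algebraic coupling paradigm" pairs two fields u1, u2 through residuals F1(u1, u2) = 0 and F2(u1, u2) = 0, and a standard way to couple separately developed solvers is block Gauss-Seidel: solve each subsystem in turn with the other field frozen. A toy linear instance (an illustrative model, not a problem from the paper):

```python
def coupled_gauss_seidel(tol=1e-10, max_iter=200):
    """Block Gauss-Seidel on a toy coupled system:
         "physics 1":  2*u1 + u2 = 4
         "physics 2":  u1 + 2*u2 = 5
    Each 'solve' updates one field with the other held fixed,
    mimicking the coupling of two independent solvers.
    Exact solution: u1 = 1, u2 = 2."""
    u1, u2 = 0.0, 0.0
    for _ in range(max_iter):
        u1_new = (4 - u2) / 2.0      # solve physics 1 for u1, u2 frozen
        u2_new = (5 - u1_new) / 2.0  # solve physics 2 for u2, fresh u1
        if abs(u1_new - u1) + abs(u2_new - u2) < tol:
            return u1_new, u2_new
        u1, u2 = u1_new, u2_new
    return u1, u2

u1, u2 = coupled_gauss_seidel()
assert abs(u1 - 1.0) < 1e-8 and abs(u2 - 2.0) < 1e-8
```

For this diagonally dominant model the iteration contracts by a factor of 4 per sweep; for realistic multiphysics problems convergence of such partitioned iterations is exactly the kind of question the mathematical analysis in the paper addresses.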

10.1177/1094342012468181 article EN The International Journal of High Performance Computing Applications 2013-02-01

This paper presents an analytical model to predict the performance of...

10.1145/1693453.1693470 article EN 2010-01-09

Resilience is a major roadblock for HPC executions on future exascale systems. These systems will typically gather millions of CPU cores running up to a billion threads. Projections from current large systems and technology evolution predict that errors will happen many times per day. These errors will propagate and generate various kinds of malfunctions, from simple process crashes to result corruptions. The past five years have seen extraordinary technical progress in domains related to resilience. Several options, initially considered...
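One option the resilience community weighs is classic checkpoint/restart, whose efficiency hinges on how often the application checkpoints. A standard back-of-the-envelope for this (Young's approximation, a well-known result, not one taken from this paper) puts the near-optimal interval at sqrt(2 * C * M) for checkpoint cost C and system mean time between failures M:

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for the near-optimal checkpoint interval:
    T ~= sqrt(2 * C * M), where C is the time to write one checkpoint
    and M is the system's mean time between failures (both in seconds)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Example: 10-minute checkpoints on a machine failing every ~4 hours.
t = young_interval(600, 4 * 3600)
print(f"checkpoint every {t / 60:.1f} minutes")  # ~69.3 minutes
```

The formula makes the exascale concern above concrete: as the MTBF M shrinks with machine scale while the checkpoint cost C grows with memory size, the optimal interval collapses toward the checkpoint time itself, which is why plain checkpoint/restart is projected to stop working at exascale.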

10.14529/jsfi140101 article EN cc-by Supercomputing Frontiers and Innovations 2014-03-01
Yuri Alexeev Maximilian Amsler Marco Antonio Barroca Sanzio Bassini Torey Battelle and 95 more Daan Camps David Casanova Young Jay Choi Frederic T. Chong Charles Chung C.F. Codella Antonio Córcoles James Cruise Alberto Di Meglio I. Ďuran Thomas Eckl Sophia E. Economou Stephan Eidenbenz Bruce G. Elmegreen Clyde Fare Ismael Faro Cristina Sanz Fernández Rodrigo Neumann Barros Ferreira Keisuke Fuji Bryce Fuller Laura Gagliardi Giulia Galli Jennifer R. Glick Isacco Gobbi Pranav Gokhale Salvador de la Puente Gonzalez Johannes Greiner William Gropp Michele Grossi Emanuel Gull Burns Healy Matthew R. Hermes Benchen Huang Travis S. Humble Nobuyasu Ito Artur F. Izmaylov Ali Javadi-Abhari Douglas M. Jennewein Shantenu Jha Liang Jiang Barbara Jones Wibe A. de Jong Petar Jurcevic William Kirby Stefan Kister Masahiro Kitagawa Joel Klassen Katherine Klymko Kwangwon Koh Masaaki Kondo Dog̃a Murat Kürkçüog̃lu Krzysztof Kurowski Teodoro Laino Ryan Landfield Matt Leininger Vicente Leyton‐Ortega Ang Li Meifeng Lin Junyu Liu Nicolás Lorente André Luckow Simon Martiel Francisco Martín-Fernández Margaret Martonosi Claire Marvinney Arcesio Castaneda Medina Dirk Merten Antonio Mezzacapo Kristel Michielsen Abhishek Mitra Tushar Mittal Kyungsun Moon Joel E. Moore Sarah Mostame Mário Motta Young-Hye Na Yunseong Nam Prineha Narang Yu‐ya Ohnishi Diego Ottaviani Matthew Otten Scott Pakin V. R. Pascuzzi Edwin Pednault Tomasz Piontek Jed W. Pitera Patrick Rall Gokul Subramanian Ravi Niall Robertson Matteo A. C. Rossi Piotr Rydlichowski Hoon Ryu Ge. G. Samsonidze Mitsuhisa Sato Nishant Saurabh

10.1016/j.future.2024.04.060 article EN Future Generation Computer Systems 2024-05-31

On implementing MPI-IO portably and with high performance. Rajeev Thakur, William Gropp, and Ewing Lusk (Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL). IOPADS '99: Proceedings of the Sixth Workshop on I/O in Parallel and Distributed Systems, May 1999, pages 23-32. https://doi.org/10.1145/301816.301826

10.1145/301816.301826 article EN 1999-05-01