- Distributed and Parallel Computing Systems
- Advanced Chemical Physics Studies
- Scientific Computing and Data Management
- Cloud Computing and Resource Management
- Spectroscopy and Quantum Chemical Studies
- Protein Structure and Dynamics
- Atmospheric Ozone and Climate
- Advanced Data Storage Technologies
- Tensor Decomposition and Applications
- Advanced NMR Techniques and Applications
- Photoreceptor and Optogenetics Research
- Advanced Fluorescence Microscopy Techniques
- Lipid Membrane Structure and Behavior
- Various Chemistry Research Topics
- Distributed Systems and Fault Tolerance
- Physics of Superconductivity and Magnetism
- Enzyme Structure and Function
- Advanced Condensed Matter Physics
- Parallel Computing and Optimization Techniques
- Quantum, Superfluid, Helium Dynamics
- Nanopore and Nanochannel Transport Studies
- Machine Learning in Materials Science
- Atmospheric and Environmental Gas Dynamics
- Algebraic Structures and Combinatorial Models
- Molecular Junctions and Nanostructures
IBM Research - Thomas J. Watson Research Center
2019-2023
IBM (United States)
2020-2022
Lawrence Livermore National Laboratory
2020
SLAC National Accelerator Laboratory
2013-2017
Stanford University
2013-2017
Ames Research Center
2016
College of Saint Benedict and Saint John's University
2011
Abstract TeraChem was born in 2008 with the goal of providing fast on-the-fly electronic structure calculations to facilitate ab initio molecular dynamics studies of large biochemical systems such as photoswitchable proteins and multichromophoric antenna complexes. Originally developed for videogaming applications, graphics processing units (GPUs) offered a low-cost parallel computer architecture that became more accessible for general-purpose GPU computing with the release of CUDA in 2007. The evaluation...
Developed over the past decade, TeraChem is an electronic structure and ab initio molecular dynamics software package designed from the ground up to leverage graphics processing units (GPUs) to perform large-scale excited-state quantum chemistry calculations in the gas and condensed phases. TeraChem's speed stems from a reformulation of conventional theories in terms of a set of individually optimized high-performance operations (e.g., Coulomb and exchange matrix builds, one- and two-particle density builds) and rank-reduction...
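For reference, the Coulomb and exchange matrix builds mentioned here are the standard contractions of the two-electron integrals with the one-particle density matrix (textbook definitions, not TeraChem's specific implementation):

$$
J_{\mu\nu} = \sum_{\lambda\sigma} (\mu\nu|\lambda\sigma)\, P_{\lambda\sigma},
\qquad
K_{\mu\nu} = \sum_{\lambda\sigma} (\mu\lambda|\nu\sigma)\, P_{\lambda\sigma},
$$

and it is these dense, regular contractions that map naturally onto massively parallel GPU hardware.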
The Frenkel exciton model is a useful tool for theoretical studies of multichromophore systems. We recently showed that the model could be used to coarse-grain electronic structure in multichromophoric systems, focusing on singly excited states [Acc. Chem. Res. 2014, 47, 2857-2866]. However, our previous implementation excluded charge-transfer states, which can play an important role in light-harvesting systems and near-infrared optoelectronic materials. Recent studies have also emphasized the significance...
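For orientation, the Frenkel exciton Hamiltonian restricted to singly excited states takes the standard form (a textbook expression; the extension described here adds charge-transfer basis states on top of it):

$$
\hat{H} = \sum_{n} E_n\, \lvert n \rangle\langle n \rvert \;+\; \sum_{m \neq n} V_{mn}\, \lvert m \rangle\langle n \rvert,
$$

where $\lvert n \rangle$ is the state with chromophore $n$ excited, $E_n$ its site energy, and $V_{mn}$ the excitonic coupling between chromophores $m$ and $n$; charge-transfer states enter as additional basis states with the electron and hole on different chromophores.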
Significance Here we present an unprecedented multiscale simulation platform that enables modeling, hypothesis generation, and discovery across biologically relevant length and time scales to predict mechanisms that can be tested experimentally. We demonstrate that our predictive simulation-experimental validation loop generates accurate insights into RAS-membrane biology. Evaluating over 100,000 correlated simulations, we show that RAS-lipid interactions are dynamic and evolving, resulting in: 1) a reordering...
The second-order approximate coupled cluster singles and doubles method (CC2) is a valuable tool in electronic structure theory. Although the density fitting approximation has been successful in extending CC2 to larger molecules, it cannot address the steep $\mathcal{O}(N^5)$ scaling with the number of basis functions, N. Here, we introduce a tensor hypercontraction (THC) approximation of CC2 (THC-CC2), which reduces the $\mathcal{O}(N^4)$ storage...
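The THC factorization invoked in this and the following abstracts approximates the electron repulsion integrals as a product of low-rank factors (the standard THC form):

$$
(pq|rs) \;\approx\; \sum_{P,Q} X_p^P X_q^P\, Z^{PQ}\, X_r^Q X_s^Q,
$$

so the four-index integral tensor is never stored; only the two-index factors $X$ and $Z$ are kept, which is the source of the storage and scaling reductions quoted here.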
We apply an orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from $\mathcal{O}(N^6)$ to $\mathcal{O}(N^5)$, with remarkably low error. Combined with a T1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted implementation.
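A minimal sketch of the scaling reduction, under assumed shapes and names (not the paper's code): with a THC-factorized integral tensor, the particle-particle ladder contraction $\sum_{cd}(ac|bd)\,t_{ij}^{cd}$ splits into a chain of cheaper contractions that never form the four-index tensor.

```python
# Illustrative sketch (assumed shapes/names, not the paper's implementation)
# of how a THC factorization of the ERIs turns the O(N^6) CCSD particle-particle
# ladder contraction  sum_{cd} (ac|bd) t_ij^cd  into cheaper sequential steps.
import numpy as np

nv, no, nP = 8, 4, 20                # virtuals, occupieds, THC rank
rng = np.random.default_rng(0)
X = rng.standard_normal((nv, nP))    # THC collocation factors X_a^P (virtual block)
Z = rng.standard_normal((nP, nP))    # THC core matrix Z^{PQ}
t2 = rng.standard_normal((no, no, nv, nv))   # doubles amplitudes t_ij^cd

# Reference path: build the 4-index ERI explicitly, then contract.
eri = np.einsum('aP,cP,PQ,bQ,dQ->acbd', X, X, Z, X, X)
ladder_ref = np.einsum('acbd,ijcd->ijab', eri, t2)

# Factorized path: never form the 4-index tensor.
tmp = np.einsum('cP,ijcd->ijPd', X, t2)      # contract c onto grid index P
tmp = np.einsum('dQ,ijPd->ijPQ', X, tmp)     # contract d onto grid index Q
tmp *= Z                                     # apply core matrix elementwise in P,Q
tmp = np.einsum('bQ,ijPQ->ijPb', X, tmp)     # back-transform Q -> b
ladder_thc = np.einsum('aP,ijPb->ijab', X, tmp)  # back-transform P -> a

assert np.allclose(ladder_ref, ladder_thc)   # identical result, cheaper path
```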
The tensor hypercontraction (THC) formalism is applied to equation-of-motion second-order approximate coupled cluster singles and doubles (EOM-CC2). The resulting method, THC-EOM-CC2, is shown to scale as $\mathcal{O}(N^4)$, a reduction of one order from the formal $\mathcal{O}(N^5)$ scaling of conventional EOM-CC2. Numerical tests for a variety of molecules show that errors of less than 0.02 eV are introduced into the excitation energies.
We have recently introduced the tensor hypercontraction (THC) method for electronic structure, including MP2. Here, we present an algorithm for THC-MP2 that lowers the memory requirements as well as the prefactor while maintaining the formal quartic scaling demonstrated previously. We also describe a procedure to optimize the quadrature grids used in grid-based least-squares (LS) THC-MP2. We apply this procedure to generate grids for first-row atoms with fewer than 100 points/atom while incurring negligible errors in computed energies. We benchmark...
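A hypothetical illustration of the grid-based least-squares fit mentioned above: with fixed collocation factors $X$ (orbitals evaluated on quadrature points), the core matrix $Z$ solves a linear least-squares problem whose normal equations have a simple closed form. All sizes and names below are illustrative, not the paper's.

```python
# LS-THC fit sketch (hypothetical, not the paper's code): given collocation
# factors X, find the core matrix Z minimizing || ERI - G Z G^T ||, where
# G[(pq),P] = X[p,P] * X[q,P].
import numpy as np

n, nP = 6, 30                        # orbitals, grid points
rng = np.random.default_rng(1)
X = rng.standard_normal((n, nP))

# Synthetic ERI that is exactly THC-representable, so the fit is exact here.
Z_true = rng.standard_normal((nP, nP))
Z_true = 0.5 * (Z_true + Z_true.T)   # symmetric under (pq) <-> (rs)
G = np.einsum('pP,qP->pqP', X, X).reshape(n * n, nP)
eri = G @ Z_true @ G.T               # flattened (pq|rs), shape (n^2, n^2)

# Normal equations: S = G^T G has the closed form S_PQ = (sum_p X_p^P X_p^Q)^2,
# and the fitted core matrix is Z = S^+ (G^T ERI G) S^+.
S = (X.T @ X) ** 2                   # equals G.T @ G, elementwise square
Sinv = np.linalg.pinv(S)             # pseudoinverse: S can be rank-deficient
Z_fit = Sinv @ (G.T @ eri @ G) @ Sinv

assert np.allclose(eri, G @ Z_fit @ G.T)   # residual vanishes for this ERI
```

Grid optimization then amounts to choosing quadrature points that keep this residual small with as few points per atom as possible.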
Computational models can define the functional dynamics of complex systems in exceptional detail. However, many modeling studies face seemingly incommensurate requirements: gaining meaningful insight into some phenomena requires models with high-resolution (microscopic) detail that must nevertheless evolve over large (macroscopic) length and time scales. Multiscale modeling has become increasingly important to bridge this gap. Executing multiscale simulations on current petascale computers at high levels of parallelism...
We have implemented the Martini force field within Lawrence Livermore National Laboratory's molecular dynamics program, ddcMD. The program is extended to a heterogeneous programming model so that it can exploit graphics processing unit (GPU) accelerators. In addition to the force evaluation being ported to the GPU, the entire integration step, including the thermostat, barostat, and constraint solver, is ported as well, which speeds up simulations 278-fold using one GPU vs. one central processing unit (CPU) core. A benchmark study is performed with several test...
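The design point, keeping the whole timestep on the device so no coordinates cross the host-device boundary between steps, can be pictured with the toy sketch below. It uses CuPy as a stand-in for ddcMD's CUDA code; the harmonic "force field" and all names are placeholders, not the Martini implementation.

```python
# Toy sketch of a fully device-resident MD step (illustrative stand-in for
# the GPU port described above; not ddcMD code). Requires a CUDA GPU + CuPy.
import cupy as cp

n, dt = 1024, 0.002
pos = cp.random.standard_normal((n, 3))
vel = cp.zeros((n, 3))
mass = cp.ones((n, 1))

def forces(x):
    # Placeholder harmonic tether instead of the Martini force field.
    return -x

def step(pos, vel):
    # Velocity Verlet entirely on the device: no cp.asnumpy() round trips,
    # which is the point of porting the whole integration step to the GPU.
    vel += 0.5 * dt * forces(pos) / mass
    pos += dt * vel
    vel += 0.5 * dt * forces(pos) / mass
    # A thermostat, barostat, and constraint solver would also run here,
    # on the device, to avoid per-step host-device transfers.
    return pos, vel

for _ in range(1000):
    pos, vel = step(pos, vel)
print(float(cp.linalg.norm(vel)))    # single scalar copied back at the end
```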
The advancement of machine learning techniques and the heterogeneous architectures of most current supercomputers are propelling the demand for large multiscale simulations that can automatically and autonomously couple diverse components and map them to relevant resources to solve complex problems at multiple scales. Nevertheless, despite recent progress in workflow technologies, current capabilities are limited to coupling two scales. In the first-ever demonstration using three scales of resolution, we present a scalable and generalizable...
The azirinyl cation (C2H2N(+)) and its geometrical isomers could be present in the interstellar medium. C2H2N(+) isomers are, however, difficult to identify in interstellar chemistry because of the lack of high-resolution spectroscopic data from laboratory experiments. Ab initio quantum chemical methods were used to characterize the structures, relative energies, and physical properties of low-energy isomers of this cation. We have employed second-order Møller-Plesset perturbation theory (MP2), Z-averaged perturbation theory (ZAPT2), and coupled cluster with singles and doubles...
The introduction of heterogeneous computing via GPUs with the Sierra architecture represented a significant shift in direction for computational science at Lawrence Livermore National Laboratory (LLNL) and therefore required careful preparation. Over the last five years, a Center of Excellence (CoE) has brought employees with specific expertise from IBM and NVIDIA together with LLNL in a concentrated effort to prepare applications, system software, and tools for the supercomputer. This article shares the process we applied in the CoE and documents...
Abstract RAS is a signaling protein associated with the cell membrane that is mutated in 30% of human cancers. RAS has been proposed to be regulated by dynamic heterogeneity of the cell membrane. Investigating such a mechanism requires near-atomic detail at macroscopic temporal and spatial scales, which is not possible with conventional computational or experimental techniques. We demonstrate here a multiscale simulation infrastructure that uses machine learning to create a scale-bridging ensemble of over 100,000 simulations of active...
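The machine-learning selection step in such scale-bridging ensembles can be pictured with a simple novelty heuristic: promote the macro-scale configurations whose latent-space encodings are farthest from anything already simulated at higher resolution. The sketch below is a generic illustration of that idea under assumed names and dimensions, not the authors' actual sampling code.

```python
# Generic novelty-based selection sketch (hypothetical; not the paper's code):
# promote the macro-scale patches least similar, in a learned latent space,
# to patches already simulated at micro-scale resolution.
import numpy as np

def select_novel(candidates, simulated, k):
    """Return indices of the k candidates farthest from any simulated point."""
    # Pairwise distances between candidate and simulated latent vectors.
    d = np.linalg.norm(candidates[:, None, :] - simulated[None, :, :], axis=-1)
    novelty = d.min(axis=1)            # distance to the nearest simulated point
    return np.argsort(novelty)[-k:]    # k most novel candidates

rng = np.random.default_rng(2)
candidates = rng.standard_normal((2000, 8))  # latent encodings of macro patches
simulated = rng.standard_normal((200, 8))    # latents of patches already run
print(select_novel(candidates, simulated, k=16))  # patches to promote to MD
```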
Complex problems can often only be solved with a workflow of different applications whose progress also depends on the need for shared data. There are challenges that developers must overcome when sharing data in a workflow; each application may have very different requirements for how it accesses and consumes data. For example, access may be online or offline, and data sizes and types vary, as well as the frequency with which data are consumed and produced by the workflow. Moreover, producers and consumers may not run at the same time scale, thus introducing possible latency. Also,...
Productivity from day one on supercomputers that leverage new technologies requires significant preparation. An institution that procures a novel system architecture often lacks sufficient institutional knowledge and skills to prepare for it. Thus, the "Center of Excellence" (CoE) concept has emerged for systems such as Summit and Sierra, currently the top two systems in the TOP500. This paper documents CoE experiences that prepared a workload of diverse applications and math libraries for a heterogeneous system. We describe our approach...