- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Energy, Environment, and Transportation Policies
- Parallel Computing and Optimization Techniques
- African History and Culture Analysis
- Political Theory and Gramsci
- Social Media and Politics
- Software System Performance and Reliability
- Child Nutrition and Water Access
- Impact of Technology on Adolescents
- Embedded Systems Design Techniques
- Hybrid Renewable Energy Systems
- Distributed Systems and Fault Tolerance
- Research Data Management Practices
- Iron Metabolism and Disorders
- Cloud Data Security Solutions
- Particle Detector Development and Performance
- Advancements in Photolithography Techniques
- Romani and Gypsy Studies
- Religious Education and Schools
- Global Education and Multiculturalism
- Energy and Environment Impacts
- Misinformation and Its Impacts
Texas Advanced Computing Center, 2021-2023
Meta (Israel), 2021
Purdue University West Lafayette, 2010-2020
Cornell University, 2019
University of California, Berkeley, 2019
Fordham University, 2019
Office of the United Nations High Commissioner for Refugees, 2019
Los Alamos National Laboratory, 2018
VPIC is a general-purpose particle-in-cell simulation code for modeling plasma phenomena such as magnetic reconnection, fusion, solar weather, and laser-plasma interaction in three dimensions using large numbers of particles. VPIC's capacity for both fidelity and scale makes it particularly well suited for research on pre-exascale and exascale platforms. In this article, we demonstrate the unique challenges involved in preparing it for operation at exascale, outlining important optimizations to make efficient...
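For readers unfamiliar with the particle-in-cell method this abstract builds on, the sketch below shows the basic PIC cycle (scatter charge to a grid, solve for the field, gather it back, push the particles) in a deliberately simplified 1D electrostatic form. It is an illustrative toy only, not VPIC's implementation; the normalized units, nearest-grid-point weighting, and naive field solve are all simplifying assumptions.

```cpp
// Minimal 1D electrostatic particle-in-cell step (illustrative; not VPIC code).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int    NG = 64;       // grid cells
    const int    NP = 10000;    // particles
    const double L  = 1.0;      // periodic domain length (normalized units)
    const double dx = L / NG;
    const double dt = 1e-3;
    const double qm = -1.0;     // charge-to-mass ratio (normalized)
    const double PI = 3.141592653589793;

    std::vector<double> x(NP), v(NP), rho(NG, 0.0), E(NG, 0.0);
    for (int p = 0; p < NP; ++p) {      // uniform load, small velocity perturbation
        x[p] = (p + 0.5) * L / NP;
        v[p] = 0.01 * std::sin(2.0 * PI * x[p] / L);
    }

    // --- one PIC cycle: scatter -> field solve -> gather -> push ---
    // 1) Scatter: deposit particle charge to the grid (nearest-grid-point weighting).
    for (int p = 0; p < NP; ++p)
        rho[int(x[p] / dx) % NG] += 1.0 / NP;

    // 2) Field solve: crude periodic integration of Gauss's law
    //    (a production code would use an FFT or multigrid Poisson solver).
    double acc = 0.0;
    for (int g = 0; g < NG; ++g) { acc += (rho[g] - 1.0 / NG) * dx; E[g] = acc; }

    // 3) Gather + 4) Push: interpolate E to each particle, then advance it.
    for (int p = 0; p < NP; ++p) {
        double Ep = E[int(x[p] / dx) % NG];
        v[p] += qm * Ep * dt;                         // accelerate
        x[p] = std::fmod(x[p] + v[p] * dt + L, L);    // move with periodic wrap
    }
    std::printf("first particle after one step: x=%f v=%f\n", x[0], v[0]);
}
```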
The increasing availability of commercial cloud computing resources in recent years has caught the attention of the high-performance computing (HPC) and scientific computing community. Many researchers have subsequently examined the relative computational performance of commercially available offerings across a number of HPC application benchmarks and workflows, but analogous cost comparisons, i.e., comparisons between doing computation in traditional on-premises environments vs. cloud environments, are less frequently discussed and are difficult to make...
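One reason such comparisons are hard is that on-premises costs must be amortized and then normalized against measured performance before they can be set against per-hour cloud prices. A minimal sketch of that normalization, with entirely made-up rates and benchmark times, might look like:

```cpp
// Hypothetical cost-per-result comparison between on-premises and cloud runs.
// All prices and benchmark times below are placeholder assumptions.
#include <cstdio>

struct Platform {
    const char* name;
    double cost_per_node_hour;   // USD; on-prem value is amortized hardware + power + staff
    double hours_per_benchmark;  // measured wall time for one benchmark run
};

int main() {
    Platform sites[] = {
        {"on-premises cluster", 0.75, 2.0},   // assumed amortized rate
        {"cloud instance",      2.10, 1.6},   // assumed on-demand rate
    };
    // Cost per completed benchmark run is the fair unit of comparison:
    // a faster platform can justify a higher hourly rate.
    for (const Platform& s : sites)
        std::printf("%-20s $%.2f per benchmark run\n",
                    s.name, s.cost_per_node_hour * s.hours_per_benchmark);
}
```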
Exascale computing brings with it diverse machine architectures and programming approaches that challenge application developers. Applications need to perform well on a wide range of platforms while simultaneously minimizing development and maintenance overheads. To alleviate these costs, developers have begun leveraging portability frameworks to maximize both the code shared between platforms and the performance of the application. We explore the effectiveness of several such frameworks by applying them to small production codes...
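The abstract does not name the frameworks here; Kokkos is one widely used performance-portability framework and serves as a representative example of the idea: a single source file targets whatever backend (OpenMP, CUDA, HIP, ...) the library was built with. A minimal sketch:

```cpp
// Minimal Kokkos example: the same loop runs on CPU or GPU depending on
// how Kokkos was configured. Build against an installed Kokkos (typically
// via CMake and find_package(Kokkos)).
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int N = 1 << 20;
        // Views allocate memory in the backend's default memory space.
        Kokkos::View<double*> x("x", N), y("y", N);
        Kokkos::parallel_for("init", N, KOKKOS_LAMBDA(const int i) {
            x(i) = 1.0; y(i) = 2.0;
        });
        Kokkos::parallel_for("axpy", N, KOKKOS_LAMBDA(const int i) {
            y(i) = 2.0 * x(i) + y(i);        // identical code on every backend
        });
        double sum = 0.0;
        Kokkos::parallel_reduce("check", N, KOKKOS_LAMBDA(const int i, double& s) {
            s += y(i);
        }, sum);
        std::printf("sum = %f (expected %f)\n", sum, 4.0 * N);
    }
    Kokkos::finalize();
}
```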
National labs, academic institutions, and industry have a strong need for scientists and staff who understand high-performance computing (HPC) and the complex interconnections across individual topics in HPC. However, domain science and computer science undergraduate programs are not providing sufficient educational resources, and are far from conveying the interdisciplinary and collaborative nature of the HPC environment. The Student Cluster Competition (SCC) was created as an educational tool to immerse undergraduates in HPC. It is a microcosm...
This paper aims to describe methods that can be used to create new Student Cluster Competition teams from the standpoint of team advisors. The purpose is to share these methods in order to provide an easier path for organizing a successful team. These methods were gleaned from a survey of advisors who have formed teams in the last four years. Four advisors responded, and those responses fit into five categories: (1) early preparation, (2) coursework specific to the competition, (3) close relationships with hardware vendors, (4) concentration on applications over...
To teach the computational science necessary to prepare STEM students for positions in both research and industry, faculty need HPC resources specifically tailored to their classrooms. Scholar was developed as a large-scale computing tool that can be used in classrooms to teach scientific principles and experimentation. In this paper, we discuss this pedagogical, campus-wide teaching resource and outline how such a resource is implemented at Purdue University.
Modern I/O applications that run on HPC infrastructures are increasingly becoming read- and metadata-intensive. However, having multiple applications submitting large amounts of I/O operations can easily saturate the shared parallel file system's resources, leading to overall performance degradation and unfairness. We present PADLL, an application- and file-system-agnostic storage middleware that enables QoS control of data and metadata workflows in HPC systems. It adopts ideas from Software-Defined Storage, building data plane stages that mediate the rate...
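PADLL's actual data plane is not reproduced here, but a token bucket is a standard primitive for the kind of request rate mediation the abstract describes. The sketch below throttles intercepted operations to a configured rate; the class, limits, and call sites are illustrative assumptions, not PADLL's API.

```cpp
// Token-bucket rate limiter: a common building block for mediating the
// rate of I/O or metadata requests before they reach a shared file system.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

class TokenBucket {
public:
    TokenBucket(double ops_per_sec, double burst)
        : rate_(ops_per_sec), capacity_(burst), tokens_(burst),
          last_(std::chrono::steady_clock::now()) {}

    // Block the caller until one operation is allowed through.
    void acquire() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            refill();
            if (tokens_ >= 1.0) { tokens_ -= 1.0; return; }
            lk.unlock();    // yield so other threads can make progress
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            lk.lock();
        }
    }

private:
    // Add tokens proportional to elapsed time, capped at the burst size.
    void refill() {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - last_).count();
        tokens_ = std::min(capacity_, tokens_ + dt * rate_);
        last_ = now;
    }
    double rate_, capacity_, tokens_;
    std::chrono::steady_clock::time_point last_;
    std::mutex m_;
};

int main() {
    TokenBucket limiter(1000.0, 100.0);   // e.g., cap metadata ops at 1000 op/s
    for (int i = 0; i < 5000; ++i) {
        limiter.acquire();
        // ... forward the intercepted open()/stat() to the parallel file system ...
    }
    std::printf("done\n");
}
```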
Node downtime and failed jobs in a computing cluster translate into wasted resources and user dissatisfaction. Therefore, understanding why nodes and jobs fail in HPC clusters is essential. This paper provides analyses of node and job failures in two university-wide clusters at Tier I US research universities. We analyzed approximately 3.0M job execution records from System A and 2.2M from System B, with data sources coming from accounting logs, resource usage data for all primary local and remote resources (memory, IO, network), and node failure data. We observe different kinds...
Understanding the total cost of ownership (TCO) of HPC systems for a site-specific range of applications is a critical component of any RFP or capacity plan at research computing centers. With the increasing use of the cloud across industry, understanding and comparing the costs of on-premises resources and cloud-based options is more important than ever. In this paper we present methods for determining the system mix, calculating the TCO of cloud resources, and comparing the two. Additionally, we present a cost analysis of Purdue's Community Cluster program with current...
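As a back-of-the-envelope illustration of this kind of TCO comparison (not the paper's actual model), one can amortize hardware and operating costs over a system's lifetime to get an on-premises cost per core-hour and set it against a cloud on-demand rate; every figure below is a placeholder assumption.

```cpp
// Toy TCO comparison: amortized on-premises cost per core-hour vs. a cloud
// on-demand rate. All inputs are invented for illustration.
#include <cstdio>

int main() {
    // --- assumed on-premises inputs ---
    double hardware_usd    = 2.0e6;   // purchase price of the cluster
    double lifetime_years  = 5.0;
    double annual_opex_usd = 3.0e5;   // power, cooling, staff, support contracts
    double cores           = 10000.0;
    double utilization     = 0.80;    // fraction of core-hours actually consumed

    double core_hours  = cores * 24 * 365 * lifetime_years * utilization;
    double tco         = hardware_usd + annual_opex_usd * lifetime_years;
    double onprem_rate = tco / core_hours;

    // --- assumed cloud input ---
    double cloud_rate = 0.045;        // USD per core-hour, on-demand

    std::printf("on-prem: $%.4f/core-hour, cloud: $%.4f/core-hour\n",
                onprem_rate, cloud_rate);
    // On-prem rate scales inversely with utilization, so the utilization at
    // which on-prem matches the cloud rate is:
    std::printf("break-even utilization: %.0f%%\n",
                100.0 * utilization * onprem_rate / cloud_rate);
}
```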
With this special section we bring you a practice and experience effort in transparency and reproducibility for large-scale computational science. A unique section, it consists of a research work plus six critiques, each by a student team that reproduced the work. The original work has been expanded beyond its science to also contribute to open science, with a discussion of the reproducibility effort. Our letter contemplates the implications as well.
High-performance computing is used by research institutions worldwide for managing data, modeling, simulation, and big data analysis. This capability is critical for higher education institutions to attract top researchers and faculty, maximize competitiveness for future funding, and train students in current data-intensive methods.
UT Austin-Portugal Program, a collaboration between the Portuguese Foundation for Science and Technology and the University of Texas at Austin, award UTA18-001217
Large community clusters are becoming increasingly common in universities and other organizations due to the benefits they provide to researchers in terms of operational costs and resource availability. However, efficient administration, failure diagnosis, and performance debugging on such clusters are challenging tasks due to the sheer diversity of workloads and users. These clusters are typically shared by users coming from various scientific domains and experience levels. Many have little background in computing and, hence, often face issues, leading to resource wastage. In this...
There is a shortage of training programs for research cyber-facilitators, and the need is only growing, especially in academia. This paper will discuss the importance of developing this workforce at the undergraduate level, the creation of a formal program for mentoring undergraduates in Research Computing at Purdue University, and how the approach has evolved. The hands-on component changed from one with students working as junior HPC administrators, performing hardware break-fix in a relative vacuum, to one where students work closely with their mentors, building real-world...
Many members of the current generation of students and researchers are accustomed to intuitive computing devices and never had to learn how to use command-line based systems, which comprise the majority of high-performance computing environments in use. In the 2013-14 time frame, both Indiana University and Purdue University separately launched virtual desktop front-ends for their high-performance computing clusters with the aim of offering an easier on-ramp for new users. Over the last five years we have iterated on and refined these approaches, and now have over two...
In this special section we bring you a practice and experience effort in reproducibility for large-scale computational science at SC20. This includes nine critiques, each by a student team that reproduced results from a paper published at SC19 during the following year's Student Cluster Competition. The paper is also included and has been expanded upon, now including an analysis of the outcomes of the students' experiments. Lastly, it encapsulates a variety of advances in the SC conference series' technical program.