- Advanced Data Storage Technologies
- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Scientific Computing and Data Management
- Big Data and Business Intelligence
- 3D Modeling in Geospatial Applications
- Parallel Computing and Optimization Techniques
- Software System Performance and Reliability
- Interconnection Networks and Systems
- Remote Sensing and LiDAR Applications
- Caching and Content Delivery
- Opportunistic and Delay-Tolerant Networks
- Soil Geostatistics and Mapping
- Image Processing and 3D Reconstruction
- Big Data Technologies and Applications
- Semiconductor Lasers and Optical Devices
- IoT and Edge/Fog Computing
- Age of Information Optimization
- Network Security and Intrusion Detection
- Data Mining Algorithms and Applications
- Simulation Techniques and Applications
- Anomaly Detection Techniques and Applications
- Office of Scientific and Technical Information (2019-2022)
- National Technical Information Service (2019-2022)
- Oak Ridge National Laboratory (2010-2020)
- Ames National Laboratory (2004)
The Oak Ridge Leadership Computing Facility (OLCF) has deployed multiple large-scale parallel file systems (PFS) to support its operations. During this process, OLCF acquired significant expertise in storage system design, software development, technology evaluation, benchmarking, procurement, deployment, and operational practices. Based on the lessons learned from each new PFS deployment, OLCF improved its operating procedures and strategies. This paper provides an account of our experience acquiring, deploying,...
The Oak Ridge Leadership Computing Facility (OLCF) is a leader in large-scale parallel file system development, design, deployment, and continuous operation. For the last decade, OLCF has designed and deployed two large center-wide file systems. The first instantiation, Spider 1, served the Jaguar supercomputer and its predecessor. Spider 2 now serves the Titan supercomputer, among many other computational resources. OLCF has been rigorously collecting storage statistics from these systems since their transition to the production state.
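As a rough illustration of what continuous storage-statistics collection can look like (a minimal sketch, not the paper's actual tooling), the following polls cumulative per-target Lustre byte counters and reports aggregate throughput. The `/proc/fs/lustre/obdfilter/*/stats` path and the `read_bytes`/`write_bytes` line format are assumptions about a typical Lustre server layout, not details taken from the abstract.

```python
#!/usr/bin/env python3
"""Sketch: periodically sample Lustre per-OST byte counters.

Assumed (hypothetical) stats layout: lines such as
  write_bytes  123 samples [bytes] 4096 1048576 987654321
where the final field is the cumulative byte count.
"""
import glob
import time

STATS_GLOB = "/proc/fs/lustre/obdfilter/*/stats"  # assumed layout


def read_counters():
    """Return {stats_path: (read_bytes, write_bytes)} cumulative totals."""
    totals = {}
    for path in glob.glob(STATS_GLOB):
        rd = wr = 0
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == "read_bytes":
                    rd = int(fields[-1])  # cumulative sum is the last field
                elif fields and fields[0] == "write_bytes":
                    wr = int(fields[-1])
        totals[path] = (rd, wr)
    return totals


def sample(interval=10.0):
    """Print aggregate read/write throughput over each interval."""
    prev = read_counters()
    while True:
        time.sleep(interval)
        cur = read_counters()
        rd = sum(c[0] - prev.get(p, (0, 0))[0] for p, c in cur.items())
        wr = sum(c[1] - prev.get(p, (0, 0))[1] for p, c in cur.items())
        print(f"read {rd / interval / 1e6:.1f} MB/s, "
              f"write {wr / interval / 1e6:.1f} MB/s")
        prev = cur


if __name__ == "__main__":
    sample()
```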
Ceph is an emerging open-source parallel distributed file and storage system. By design, Ceph leverages unreliable commodity network and storage hardware, and provides reliability and fault-tolerance via controlled object placement and data replication. This paper presents our block I/O performance and scalability evaluation of Ceph for scientific high-performance computing (HPC) environments. Our work makes two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial...
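To give a feel for the kind of block-level measurement such an evaluation involves, here is a minimal sketch of a sequential-write timing loop, not the authors' benchmark harness. The target path, 4 MiB block size, and use of O_DIRECT (to bypass the page cache, on filesystems that support it) are all assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: time sequential block writes, in the spirit of a block I/O
benchmark. Assumes Linux; TARGET is a hypothetical scratch file you
are allowed to overwrite."""
import mmap
import os
import time

TARGET = "./blockio-bench.dat"   # hypothetical scratch target
BLOCK_SIZE = 4 * 1024 * 1024     # 4 MiB per write
NUM_BLOCKS = 256                 # 1 GiB total
DIRECT = getattr(os, "O_DIRECT", 0)  # O_DIRECT is Linux-specific


def bench_write():
    # mmap gives a page-aligned buffer, which O_DIRECT requires
    buf = mmap.mmap(-1, BLOCK_SIZE)
    buf.write(os.urandom(BLOCK_SIZE))
    flags = os.O_WRONLY | os.O_CREAT
    try:
        fd = os.open(TARGET, flags | DIRECT, 0o644)
    except OSError:
        fd = os.open(TARGET, flags, 0o644)  # fs may not support O_DIRECT
    try:
        start = time.perf_counter()
        for _ in range(NUM_BLOCKS):
            os.write(fd, buf)
        os.fsync(fd)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    mb = BLOCK_SIZE * NUM_BLOCKS / 1e6
    print(f"wrote {mb:.0f} MB in {elapsed:.2f} s ({mb / elapsed:.1f} MB/s)")


if __name__ == "__main__":
    bench_write()
```

A real scalability study would sweep block sizes and client counts and run against the storage system under test rather than a local file; this only shows the shape of one data point.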
The importance of computing facilities is heralded every six months with the announcement of a new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, great capability does not come with long-term data storage capacity, which often means users must move their data to a local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, or else face the dreaded purge that most HPC centers employ to keep utilization of their large parallel filesystems low and manage...
The movement of large-scale (tens of Terabytes and larger) data sets between high performance computing (HPC) facilities is an important and increasingly critical capability. A growing number of scientific collaborations rely on HPC facilities for tasks which either require large data sets as input or produce them as output. In order to enable the transfer of these data sets as needed by the community, HPC facilities must design and deploy appropriate capabilities that allow users to do data placement at scale. This paper describes the Petascale DTN Project, an effort undertaken by four...
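Production data transfer nodes rely on tools such as Globus or GridFTP; purely to illustrate why concurrent streams help when moving many files, here is a minimal local sketch of a parallel directory copy. The `SRC`/`DST` paths and the worker count are hypothetical, and this is not how the Petascale DTN Project moves data between facilities.

```python
#!/usr/bin/env python3
"""Sketch: copy a directory tree with several concurrent workers,
illustrating the parallel-stream idea behind data transfer nodes."""
import concurrent.futures
import pathlib
import shutil

SRC = pathlib.Path("/scratch/campaign")   # hypothetical source tree
DST = pathlib.Path("/archive/campaign")   # hypothetical destination
WORKERS = 8                               # parallel copy streams


def copy_one(src_file: pathlib.Path) -> int:
    """Copy a single file, creating parent directories as needed."""
    rel = src_file.relative_to(SRC)
    dst_file = DST / rel
    dst_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, dst_file)      # preserves timestamps
    return src_file.stat().st_size


def transfer():
    files = [p for p in SRC.rglob("*") if p.is_file()]
    with concurrent.futures.ThreadPoolExecutor(WORKERS) as pool:
        total = sum(pool.map(copy_one, files))
    print(f"copied {len(files)} files, {total / 1e9:.2f} GB")


if __name__ == "__main__":
    transfer()
```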