Mehmet Belgin
- Pulsars and Gravitational Waves Research
- Cloud Computing and Resource Management
- Distributed and Parallel Computing Systems
- Gamma-ray bursts and supernovae
- Matrix Theory and Algorithms
- Scientific Computing and Data Management
- Parallel Computing and Optimization Techniques
- Geophysics and Gravity Measurements
- Numerical Methods and Algorithms
- Advanced Data Storage Technologies
- Interconnection Networks and Systems
- Software System Performance and Reliability
- Cosmology and Gravitation Theories
- Astrophysical Phenomena and Observations
- Astrophysics and Cosmic Phenomena
- Spacecraft and Cryogenic Technologies
- Seismic Imaging and Inversion Techniques
- Neutrino Physics Research
- Earthquake and Tectonic Studies
- Digital Filter Design and Implementation
- Tensor decomposition and applications
- Data Quality and Management
- Peer-to-Peer Network Technologies
- Data Stream Mining Techniques
- High-pressure geophysics and materials
Georgia Institute of Technology
2016-2022
Atlanta Technical College
2018-2019
Virginia Tech
2007-2010
A wide variety of astrophysical and cosmological sources are expected to contribute to a stochastic gravitational-wave background. Following the observations of GW150914 and GW151226, the rate and mass of coalescing binary black holes appear to be greater than many previous expectations. As a result, the background from unresolved compact binary coalescences is expected to be particularly loud. We perform a search for the isotropic stochastic gravitational-wave background using data from Advanced LIGO's first observing run. The data display no evidence of a stochastic signal. We constrain the dimensionless energy density...
We present the results of searches for gravitational waves from 200 pulsars using data from the first observing run of the Advanced LIGO detectors. We find no significant evidence for a gravitational-wave signal from any of these pulsars, but we are able to set the most constraining upper limits yet on their gravitational-wave amplitudes and ellipticities. For eight of these pulsars, our upper limits give bounds that are improvements over the indirect spin-down limit values. For another 32, we are within a factor of 10 of the spin-down limit, making it likely that some of these will be reachable in future runs of the advanced detector....
Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These models differ from each other in their treatment of black hole spins, and all make some simplifying assumptions, notably to neglect sub-dominant harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to a non-zero tolerance and is limited...
We employ gravitational-wave radiometry to map the stochastic gravitational-wave background expected from a variety of contributing mechanisms and to test the assumption of isotropy, using data from the Advanced Laser Interferometer Gravitational Wave Observatory's (aLIGO) first observing run. We also search for persistent gravitational waves from point sources with only minimal assumptions over the 20--1726 Hz frequency band. Finding no evidence of either point sources or an anisotropic background, we set limits at 90% confidence. For broadband point sources, we report upper...
We present the results from an all-sky search for short-duration gravitational waves in the data of the first run of the Advanced LIGO detectors between September 2015 and January 2016. The search algorithms use minimal assumptions on the signal morphology, so they are sensitive to a wide range of sources emitting gravitational waves. The analyses target transient signals with durations ranging from milliseconds to seconds over the frequency band of 32 to 4096 Hz. The first observed gravitational-wave event, GW150914, has been detected with high confidence in this search;...
We present the results of the search for gravitational waves (GWs) associated with $\gamma$-ray bursts detected during the first observing run of the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO). We find no evidence of a GW signal for any of the 41 bursts for which LIGO data are available with sufficient duration. For all bursts, we place lower bounds on the distance to the source using the optimistic assumption that GWs with an energy of $10^{-2}M_\odot c^2$ were emitted within the $16$-$500\,$Hz band, and we find a median 90% confidence limit...
Pattern-based Representation (PBR) is a novel approach to improving the performance of Sparse Matrix-Vector Multiply (SMVM) numerical kernels. Motivated by our observation that many matrices can be divided into blocks that share a small number of distinct patterns, we generate custom multiplication kernels for frequently recurring block patterns. The resulting reduction in index overhead significantly reduces memory bandwidth requirements and improves performance. Unlike existing methods, PBR requires...
We present the results of a search for long-duration gravitational wave transients in the data of the LIGO Hanford and Livingston second generation detectors between September 2015 and January 2016, with a total observational time of 49 days. The search targets transients of 10--500 s duration in the frequency band of 24--2048 Hz, with minimal assumptions about the signal waveform, polarization, source direction, or time of occurrence. No significant events were observed; all candidate triggers were consistent with the expected background. As a result we...
Research computing centers provide a wide variety of services, including large-scale compute resources, data storage, high-speed interconnects, and scientific software repositories, to facilitate continuous competitive research. Efficient management of these complex resources and services, as well as ensuring their fair use by a large number of researchers from different domains, are key to a center's success. Almost all research computing centers rely on monitoring based on real-time data gathered from their systems, but they often lack tools to perform deeper analysis...
Data Science research has a long history in academia, spanning from large-scale data management to mining and analysis using technologies such as database management systems (DBMSs). While traditional HPC offers tools for leveraging existing resources to meet data processing needs, the large volume of data and the speed of its generation pose significant challenges. Use of the Hadoop platform and the tools built on top of it drew immense interest after it gained success in industry. The Georgia Institute of Technology received a donation of 200 compute nodes from Yahoo. Turning...
Most HPC centers face a lack of expertise in data center migrations, as it is a rare event that only a small portion of professionals experience more than once in their entire professional careers. This paper presents how the Georgia Institute of Technology (Georgia Tech) Partnership for an Advanced Computing Environment (PACE) team employed automation to migrate its research computing environment from the old Rich datacenter (Rich) to the new Coda datacenter (Coda) in 2020. PACE successfully migrated 1844 TB of data and 3550 users without loss of user data. We implemented...
Pattern-based representation (PBR) is a novel sparse matrix representation that reduces the index overhead for many matrices without zero-filling and without requiring identification of dense blocks. The PBR analyzer identifies recurring block nonzero patterns, represents the submatrix consisting of all blocks with a given pattern in coordinate format, and generates a custom matrix-vector multiplication kernel for each such submatrix. In this way, PBR expresses structure in terms of specialized inner loops, thereby creating locality of repeating patterns via instruction...
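The analysis step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, function names, and bitmask encoding are assumptions chosen for clarity.

```python
# Hypothetical sketch of a PBR-style analysis pass: partition a sparse
# matrix (given as nonzero coordinates) into fixed-size b x b blocks and
# group blocks by the bitmask of their nonzero positions. Each distinct
# bitmask would then get its own specialized multiplication kernel.
from collections import defaultdict

def block_patterns(coords, b=2):
    """Map each nonzero-pattern bitmask to the blocks exhibiting it."""
    blocks = defaultdict(int)  # (block_row, block_col) -> pattern bitmask
    for (i, j) in coords:
        key = (i // b, j // b)
        # bit index of (i, j) within its block, row-major
        blocks[key] |= 1 << ((i % b) * b + (j % b))
    patterns = defaultdict(list)  # bitmask -> list of block coordinates
    for key, mask in blocks.items():
        patterns[mask].append(key)
    return dict(patterns)

# Example: a 4x4 matrix whose two diagonal 2x2 blocks share one pattern,
# nonzeros at local positions (0,0) and (1,1), i.e. bitmask 0b1001 = 9.
pats = block_patterns([(0, 0), (1, 1), (2, 2), (3, 3)])
# One custom kernel would then serve both blocks under pattern 9.
```

Blocks sharing a bitmask can be stored back-to-back in coordinate format, which is where the index-overhead saving comes from: the pattern is encoded once in the kernel rather than per nonzero.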
Georgia Tech's "Partnership for an Advanced Computing Environment (PACE)" team provides research computing resources, support, and consultation to the entire campus and to external collaborators. In response to campus-wide demand, PACE initiated a project called the "Instructional Cluster Environment (ICE)" to build instructional clusters for educational efforts. These resources offer an environment identical to the production clusters and are expected to provide thousands of graduate and undergraduate students with ample opportunities to gain...
This article presents a benchmarking framework, namely "ProvBench," with a specific focus on the provenance of collected data, capable of identifying and measuring the impact of changes to the hardware, operating system, software, middleware, and services that constitute a highly complex and heterogeneous research computing environment. The provenance is retained via detailed automated recording of hardware details, the runtime environment, software libraries used, input data and results, as well as execution logs of the computation....
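The kind of automated environment recording described above can be sketched with standard-library introspection. The field names and layout below are illustrative assumptions, not ProvBench's actual schema.

```python
# Hypothetical sketch of a per-run provenance record: capture hardware and
# runtime details automatically so later benchmark results can be compared
# against an exact description of the environment that produced them.
import json
import platform
import sys
import time

def capture_provenance():
    """Return a dictionary of environment details for one benchmark run."""
    return {
        "timestamp": time.time(),          # when the run started
        "hostname": platform.node(),       # which machine produced the result
        "machine": platform.machine(),     # CPU architecture, e.g. x86_64
        "os": platform.platform(),         # OS name, release, and version
        "python": sys.version.split()[0],  # interpreter version
    }

record = capture_provenance()
# Persist alongside the benchmark results, e.g. as JSON:
serialized = json.dumps(record, indent=2)
```

In practice such a record would also include library versions, input checksums, and scheduler/job metadata; the point is only that the capture is automatic rather than manual.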
Originating from partnerships formed between central IT and researchers supporting their own clusters, the traditional condominium and dedicated cluster models for research computing are appealing and prevalent among emerging centers throughout academia. In 2008, the Georgia Institute of Technology (GT) launched a campus strategy to centralize the hosting of resources across multiple science and engineering disciplines under a group of expert support personnel, and in 2009 the Partnership for an Advanced Computing Environment (PACE)...
Iterative solutions of sparse problems often achieve only a small fraction of the peak theoretical performance on modern architectures. This problem is highly challenging because sparse matrix storage schemes require data to be accessed irregularly, which leads to massive cache misses. Furthermore, the inner loop of typical sparse operations accesses an indirect and variable amount of data, which not only causes low utilization of floating-point registers, but also prevents optimization techniques that improve instruction-level parallelism (ILP), such...
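The irregular, indirect access pattern described above is visible in a plain compressed sparse row (CSR) matrix-vector multiply, sketched here for illustration (not code from the paper):

```python
# A plain CSR sparse matrix-vector multiply. The load x[indices[p]] is
# indirect and data-dependent, so its stride is irregular; each inner
# iteration performs only one multiply-add for several memory reads,
# which is the low compute-to-memory ratio discussed above.
def csr_spmv(indptr, indices, vals, x):
    """Compute y = A @ x for A stored in CSR form."""
    n = len(indptr) - 1
    y = [0.0] * n
    for row in range(n):
        acc = 0.0
        for p in range(indptr[row], indptr[row + 1]):
            acc += vals[p] * x[indices[p]]  # indirect access through indices
        y[row] = acc
    return y

# Example: A = [[1, 2], [0, 3]] in CSR form.
y = csr_spmv([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0], [1.0, 2.0])
```

Because the inner loop's trip count varies per row and its loads depend on `indices`, the compiler cannot easily unroll it or keep operands in registers, which motivates the specialized representations discussed in the surrounding abstracts.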
We discuss training workshops run by the Linux Clusters Institute (LCI), which provides education and advanced technical training for IT professionals who deploy and support High Performance Computing (HPC) clusters, which have become the most ubiquitous tools for HPC worldwide. The LCI offers workshops that cover the basics of cluster system administration, including hardware (computing, storage, networking); system-level software (e.g., provisioning systems, resource managers, job schedulers); security; and user support. These workshops also...
We describe a sustainable strategy to support a large number of researchers with widely varying scientific software needs, a common problem for most centralized Research Computing Centers on university campuses. Changes in systems and hardware, coupled with aging software, often necessitate re-compilation of existing software. The naive approach of re-compiling all the packages is not only counterproductive but may also become unrealistic, especially for small teams such as Georgia Tech's PACE...
Transitional data are a common component of most large-scale simulations and analyses. Most research computing centers provide scratch storage to keep temporary data needed only during the runtime of jobs. Efficient management of this storage becomes critical for HPC centers with limited resources. Different centers employ various policies and approaches to sustain a tricky balance between filesystem capabilities, user expectations, and excellence in customer support. In this paper, we present a homegrown, fully-automated cleanup tool, along...
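The core of such a cleanup pass can be sketched as an age-based scan. This is a minimal illustration assuming a simple "files untouched for N days expire" policy; the actual PACE tool's policy and safeguards are not described here.

```python
# Hypothetical sketch of an age-based scratch cleanup scan: walk the
# scratch tree and collect files whose modification time is older than
# the retention window. A real tool would report candidates (dry run),
# notify owners, and only then delete.
import os
import time

def find_expired(root, max_age_days, now=None):
    """Return paths under root not modified within max_age_days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    expired = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mtime < cutoff:
                    expired.append(path)
            except FileNotFoundError:
                pass  # file vanished mid-scan; running jobs may delete files
    return expired
```

Separating the scan from the deletion step keeps the tool safe to run in report-only mode while a retention policy is being tuned.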
Open Science Grid (OSG) is a consortium that enables many scientific breakthroughs by providing researchers with access to shared High Throughput Computing (HTC) compute clusters in support of large-scale collaborative research. To meet the demand on campus, the Georgia Institute of Technology (GT)'s Partnership for an Advanced Computing Environment (PACE) team launched a centralized OSG project, powered by Buzzard, an NSF-funded cluster. We describe Buzzard's unique multi-tenant architecture, which supports...
Sparse matrix operations achieve only small fractions of peak CPU speeds because of their use of specialized, index-based representations, which degrade cache utilization by imposing irregular memory accesses and increasing the number of overall accesses. Compounding the problem, the limited floating-point work in a single sparse iteration leads to low pipeline utilization. Operation stacking addresses these problems for large ensemble computations that solve multiple systems of linear equations with identical sparsity structure....
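The stacking idea can be sketched as follows: when k systems share one sparsity structure, their values can be stored together so each index load is amortized over k multiply-adds. The layout and names here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of operation stacking over a shared CSR structure:
# indptr/indices describe the common sparsity, and stacked_vals[p] holds
# the p-th nonzero of all k matrices side by side. Each index is loaded
# once and reused k times, raising the compute-to-memory ratio.
def stacked_spmv(indptr, indices, stacked_vals, xs):
    """Compute ys[s] = A[s] @ xs[s] for k matrices sharing indptr/indices."""
    n = len(indptr) - 1
    k = len(xs)
    ys = [[0.0] * n for _ in range(k)]
    for row in range(n):
        for p in range(indptr[row], indptr[row + 1]):
            col = indices[p]      # one index load ...
            for s in range(k):    # ... drives k multiply-adds
                ys[s][row] += stacked_vals[p][s] * xs[s][col]
    return ys

# Example: two 2x2 diagonal matrices, diag(2, 2) and diag(3, 3),
# sharing the sparsity pattern {(0,0), (1,1)}.
ys = stacked_spmv([0, 1, 2], [0, 1],
                  [[2.0, 3.0], [2.0, 3.0]],
                  [[1.0, 1.0], [1.0, 1.0]])
```

In a compiled implementation the innermost loop over `s` would be unrolled or vectorized, which is where the pipeline-utilization gain comes from.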