P. Nilsson
- Particle Physics: Theoretical and Experimental Studies
- High-Energy Particle Collisions Research
- Particle Detector Development and Performance
- Quantum Chromodynamics and Particle Interactions
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Cosmology and Gravitation Theories
- Distributed and Parallel Computing Systems
- Neutrino Physics Research
- Advanced Data Storage Technologies
- Radiation Detection and Scintillator Technologies
- Scientific Computing and Data Management
- Nuclear Reactor Physics and Engineering
- Medical Imaging Techniques and Applications
- Astrophysics and Cosmic Phenomena
- Black Holes and Theoretical Physics
- Big Data Technologies and Applications
- Advanced Mathematical Theories
- CCD and CMOS Imaging Sensors
- Atomic and Subatomic Physics Research
- Cloud Computing and Resource Management
- Radiation Effects in Electronics
- Particle Accelerators and Beam Dynamics
- Advancements in PLL and VCO Technologies
- Superconducting Materials and Applications
Brookhaven National Laboratory
2016-2025
Brandeis University
2023-2024
A. Alikhanyan National Laboratory
2024
Atlas Scientific (United States)
2024
The University of Adelaide
2014-2023
University of Birmingham
2019-2023
University of Toronto
2023
The University of Texas at Arlington
2008-2020
University of Bologna
2015
Osservatorio astronomico di Bologna
2015
The ATLAS experiment at CERN is one of the largest scientific machines built to date and will have ever-growing computing needs as the Large Hadron Collider collects an increasingly large volume of data over the next 20 years. ATLAS is conducting R&D projects on Amazon Web Services and Google Cloud as complementary resources for distributed computing, focusing on some key features of commercial clouds: lightweight operation, elasticity, and the availability of multiple chip architectures. The proof-of-concept phases concluded with...
This paper presents a novel approach to the joint optimization of job scheduling and data allocation in grid computing environments. We formulate this problem as a mixed integer quadratically constrained program. To tackle the nonlinearity in the constraints, we alternately fix a subset of the decision variables and optimize the remaining ones via Mixed Integer Linear Programming (MILP). We solve the MILP at each iteration with an off-the-shelf solver. Our experimental results show that our method significantly outperforms existing...
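The alternating fix-and-optimize scheme described above can be illustrated with a toy instance. The sketch below is an illustrative simplification, not the paper's MILP formulation: it assumes each job needs one dataset, each dataset has a single replica, and a uniform transfer penalty, and it replaces the MILP solver with the exact greedy optimum of each linear subproblem. All names (`alternating_optimize`, `run_cost`, etc.) are hypothetical.

```python
def total_cost(job_site, replica, run_cost, transfer_cost, job_dataset):
    # Objective: per-job run cost, plus a transfer penalty whenever a
    # job runs at a site that does not hold its dataset's replica.
    cost = 0.0
    for j, s in job_site.items():
        cost += run_cost[(j, s)]
        if replica[job_dataset[j]] != s:
            cost += transfer_cost
    return cost

def alternating_optimize(jobs, sites, job_dataset, run_cost, transfer_cost, max_iters=20):
    # Start with every dataset replicated at the first site (arbitrary).
    replica = {d: sites[0] for d in set(job_dataset.values())}
    job_site = {}
    best = float("inf")
    for _ in range(max_iters):
        # Step 1: data placement fixed -> each job independently picks
        # its cheapest site (the remaining problem is linear).
        for j in jobs:
            job_site[j] = min(
                sites,
                key=lambda s: run_cost[(j, s)]
                + (0 if replica[job_dataset[j]] == s else transfer_cost),
            )
        # Step 2: job placement fixed -> each dataset moves its replica
        # to the site where most of its consumers run (fewest transfers).
        for d in replica:
            users = [s for j, s in job_site.items() if job_dataset[j] == d]
            if users:
                replica[d] = max(set(users), key=users.count)
        cost = total_cost(job_site, replica, run_cost, transfer_cost, job_dataset)
        if cost >= best:  # objective is nonincreasing; stop at a fixed point
            break
        best = cost
    return job_site, replica, best

# Tiny example: two jobs sharing one dataset, two sites.
jobs = ["j1", "j2"]
sites = ["A", "B"]
job_dataset = {"j1": "d", "j2": "d"}
run_cost = {("j1", "A"): 1, ("j1", "B"): 5, ("j2", "A"): 2, ("j2", "B"): 4}
job_site, replica, best = alternating_optimize(jobs, sites, job_dataset, run_cost, transfer_cost=10)
```

Because each alternating step solves its subproblem exactly, the objective never increases, so the loop terminates at a local optimum; the paper's approach applies the same idea with a MILP solver handling the fixed-variable subproblems.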
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. The heterogeneous resources used by the experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours...
Two-particle correlations of direct photons were measured in central 208Pb+208Pb collisions at 158A GeV. The invariant interferometric radii were extracted for 100<K_T<300 MeV/c and compared to radii extracted from charged pion correlations. The yield of soft photons, K_T<300 MeV/c, was extracted from the correlation strength and compared to theoretical calculations.
Event-by-event fluctuations in the multiplicities of charged particles and photons, and in the total transverse energy, in 158A GeV Pb+Pb collisions are studied for a wide range of centralities. For narrow centrality bins the multiplicity distributions are found to be near-perfect Gaussians. The effect of the detector acceptance on the fluctuations has been demonstrated to follow statistical considerations. The centrality dependence of the charged particle fluctuations in the measured data agrees reasonably well with that obtained from a participant model. However, photons show lower fluctuations compared...
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS and ALICE are among the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on...
The ATLAS Event Service (ES) implements a new fine-grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot-market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Data flows utilize remote repositories with...
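The core idea of event-level management — workers lease small event ranges rather than whole files, so preempted work loses at most one range — can be sketched in a few lines. This is a hypothetical toy, not the ES implementation; the class and method names (`EventRangeDispatcher`, `lease`, `report_done`) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class EventRangeDispatcher:
    """Toy dispatcher: hands out small event ranges so a transient worker
    can be preempted with minimal lost work (hypothetical sketch)."""
    total_events: int
    chunk: int = 10          # events per leased range
    next_event: int = 0
    done: list = field(default_factory=list)

    def lease(self):
        # Hand out the next half-open range [start, end), or None when exhausted.
        if self.next_event >= self.total_events:
            return None
        start = self.next_event
        end = min(start + self.chunk, self.total_events)
        self.next_event = end
        return (start, end)

    def report_done(self, rng):
        # Bookkeeping at the event-range level: completed ranges are
        # recorded, so work already done survives worker preemption.
        self.done.append(rng)

# A single worker draining the dispatcher.
d = EventRangeDispatcher(total_events=25, chunk=10)
processed = 0
while (r := d.lease()) is not None:
    processed += r[1] - r[0]   # stand-in for actual event processing
    d.report_done(r)
```

With many workers pulling from one dispatcher, a spot-market or hole-filling slot that disappears mid-job forfeits only its current range, which another worker can later re-lease.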
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old paradigm of locally managed batch jobs in HEP...
Several hadronic observables have been studied in central $158A\,\mathrm{GeV}$ Pb+Pb collisions using data measured by the WA98 experiment at CERN: single $\pi^-$ and $K^-$ production, as well as two- and three-pion interferometry. The Wiedemann-Heinz hydrodynamical model has been fitted to the pion spectrum, giving an estimate of the temperature and transverse flow velocity. Bose-Einstein correlations between two identified pions were analyzed as a function of $k_T$, for different...
Neutral pion transverse momentum spectra were measured in p+C and p+Pb collisions at $\sqrt{s_{NN}}=17.4$ GeV at midrapidity ($2.3 \lesssim \eta_{lab} \lesssim 3.0$) over the range $0.7 \le p_T \le 3.5$ GeV/c. The spectra are compared to $\pi^0$ spectra measured in Pb+Pb collisions at $\sqrt{s_{NN}}=17.3$ GeV by the same experiment. For a wide range of centralities ($N_{part} \lesssim 300$), the yield of $\pi^0$'s with $p_T$ greater than 2 GeV/c is larger than or consistent with the yields scaled by the number of nucleon-nucleon collisions ($N_{coll}$), while for central collisions with $N_{part}$ greater than 350, it is suppressed.
The Vera C. Rubin Observatory will produce an unprecedented astronomical data set for studies of the deep and dynamic universe. Its Legacy Survey of Space and Time (LSST) will image the entire southern sky every three to four days, producing tens of petabytes of raw images and associated calibration data over the course of the experiment's run. More than 20 terabytes of data must be stored every night, and annual campaigns to reprocess the entire dataset since the beginning of the survey will be conducted over ten years. The Production and Distributed Analysis (PanDA) system was evaluated by the Data Management team...
Machine Learning (ML) has become one of the most important tools for High Energy Physics analysis. As dataset sizes increase at the Large Hadron Collider (LHC), and at the same time search spaces become bigger in order to exploit the physics potential, more computing resources are required for processing these ML tasks. In addition, complex advanced workflows are being developed in which one task may depend on the results of previous tasks. How to make use of the vast distributed CPUs and GPUs of the WLCG for these big ML tasks has become a popular research area. In this paper, we present our...
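The "one task may depend on the results of previous tasks" pattern is a directed acyclic graph of tasks, and a valid execution order can be computed with a standard topological sort. The sketch below uses Kahn's algorithm on a hypothetical four-task ML pipeline; the task names and the `topo_order` helper are illustrative, not part of the system described in the paper.

```python
from collections import deque

def topo_order(deps):
    """Kahn's algorithm. `deps` maps task -> set of prerequisite tasks.
    Returns an execution order in which every task follows its prerequisites."""
    indeg = {t: len(p) for t, p in deps.items()}
    children = {t: [] for t in deps}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    # Tasks with no unmet prerequisites are ready to run.
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in sorted(children[t]):
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

# Hypothetical pipeline: preprocessing feeds two trainings, which feed an ensemble.
pipeline = {
    "preprocess": set(),
    "train_a": {"preprocess"},
    "train_b": {"preprocess"},
    "ensemble": {"train_a", "train_b"},
}
order = topo_order(pipeline)
```

In a distributed setting the same structure also exposes parallelism: `train_a` and `train_b` become ready simultaneously and can be dispatched to different GPU resources.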
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that even fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like applications by implementing an event-level...
Results on the study of localized fluctuations in the multiplicity of charged particles and photons produced in 158A GeV/c Pb+Pb collisions are presented for varying centralities. The charged versus neutral particle correlations in common phase space regions of varying azimuthal sizes are analyzed by two different methods. Various types of mixed events are constructed to probe fluctuations arising from different sources. The measured results are compared with those obtained from simulated events. The comparison indicates the presence of nonstatistical fluctuations in both charged particle and photon multiplicities in limited...
The Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. It started operating in 2009 with a scientific program foreseen to extend over the coming decades at increasing energies and luminosities to maximise the discovery potential. During Run 1 (2009-2013), the Worldwide LHC Computing Grid (WLCG) successfully delivered all the necessary computing resources, which made the discovery of the Higgs boson possible. Looking ahead, it is forecast that the increased energies and luminosities will multiply the storage and processing...