- Particle Physics Theoretical and Experimental Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Neutrino Physics Research
- Computational Physics and Python Applications
- Distributed and Parallel Computing Systems
- Advanced Data Storage Technologies
- Black Holes and Theoretical Physics
- Astrophysics and Cosmic Phenomena
- Medical Imaging Techniques and Applications
- Scientific Computing and Data Management
- Parallel Computing and Optimization Techniques
- Atomic and Subatomic Physics Research
- Particle Accelerators and Free-Electron Lasers
- Superconducting Materials and Applications
- Radiation Detection and Scintillator Technologies
- Gamma-Ray Bursts and Supernovae
- Nuclear Reactor Physics and Engineering
- Nuclear Physics Research Studies
- International Science and Diplomacy
- Radiation Therapy and Dosimetry
- Advanced Mathematical Theories
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas
2016-2025
Unidades Centrales Científico-Técnicas
2020-2025
Port d'Informació Científica
2013-2024
Universidad Autónoma de Madrid
2014-2023
Universidad Complutense de Madrid
2023
Center for Migration Studies of New York
2021
Universitat de Barcelona
2010-2020
Universitat Autònoma de Barcelona
2013-2020
Institut de Recherche sur les Lois Fondamentales de l'Univers
2019
University of Belgrade
2015
This paper presents the design of the LHCb trigger and its performance on data taken at the LHC in 2011. A principal goal of LHCb is to perform flavour physics measurements, and the trigger is designed to distinguish charm and beauty decays from the light quark background. Using a combination of lepton identification and measurements of the particles' transverse momenta, the trigger selects particles originating from charm and beauty hadrons, which typically fly a finite distance before decaying. The trigger reduces the roughly 11 MHz of bunch-bunch crossings that contain at least one inelastic pp...
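A purely illustrative sketch of the kind of selection described above, not the actual LHCb trigger code: keep candidates that either pass lepton identification or combine high transverse momentum with a displaced origin, mimicking charm and beauty hadrons that fly a finite distance before decaying. The thresholds are arbitrary example values.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    pt: float       # transverse momentum in GeV (toy value)
    ip: float       # impact parameter w.r.t. the primary vertex in mm (toy value)
    is_muon: bool   # passed lepton (muon) identification

def passes_toy_trigger(c: Candidate, pt_cut: float = 1.5, ip_cut: float = 0.1) -> bool:
    """Keep identified muons, or displaced high-pT tracks."""
    return c.is_muon or (c.pt > pt_cut and c.ip > ip_cut)

candidates = [
    Candidate(pt=2.3, ip=0.25, is_muon=False),   # displaced, high pT -> kept
    Candidate(pt=0.8, ip=0.02, is_muon=False),   # prompt, soft -> rejected
    Candidate(pt=1.1, ip=0.01, is_muon=True),    # identified muon -> kept
]
selected = [c for c in candidates if passes_toy_trigger(c)]
print(f"{len(selected)} of {len(candidates)} candidates selected")
```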
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments....
High Energy Physics (HEP) experiments will enter a new era with the start of the HL-LHC program, with computing needs surpassing current capacities by large factors. Anticipating such a scenario, funding agencies from participating countries are encouraging the experimental collaborations to consider the rapidly developing High Performance Computing (HPC) international infrastructures to satisfy at least a fraction of the foreseen HEP processing demands. These HPC systems are highly non-standard facilities, custom-built for use...
Efforts in distributed computing of the CMS experiment at the CERN LHC are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to those provided by traditional Grid sites, while also being much more diverse and heterogeneous. Handling their special capabilities or limitations while maintaining global flexibility and efficiency, and operating at scales much higher than the current capacity, are the major challenges being addressed by the CMS Submission Infrastructure team....
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources pledged at Tier-1 and Tier-2 grid sites exceed 100,000 CPU cores, while another 50,000 cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are also becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge...
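A minimal sketch of how the size of an HTCondor pool such as the one described above can be inspected with the htcondor Python bindings. The collector address is a hypothetical placeholder, not the actual CMS Global Pool endpoint, and the attribute names are common HTCondor/glideinWMS ones assumed here for illustration.

```python
import htcondor

# Hypothetical pool collector address
collector = htcondor.Collector("collector.example.org")

# Query the startd (worker slot) ads, keeping only the attributes we need.
slots = collector.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "Cpus", "State", "GLIDEIN_Site"],
)

total_cores = sum(int(ad.get("Cpus", 0)) for ad in slots)
idle_cores = sum(int(ad.get("Cpus", 0)) for ad in slots if ad.get("State") == "Unclaimed")

print(f"slots: {len(slots)}, cores: {total_cores}, idle cores: {idle_cores}")
```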
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of Run 2. High-pileup, complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing time budget. The CMS data production framework is introducing parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize,...
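A minimal sketch of event-level parallelism under the assumptions above; the "reconstruct" function is a toy stand-in, not the CMS software framework, and only illustrates processing independent collision events on several cores at once.

```python
import math
from multiprocessing import Pool

def reconstruct(event_id: int) -> float:
    """Stand-in for a CPU-heavy reconstruction step on one event."""
    return sum(math.sin(i) for i in range(50_000 + event_id))

if __name__ == "__main__":
    event_ids = range(64)
    with Pool(processes=4) as pool:        # e.g. a 4-core slot
        results = pool.map(reconstruct, event_ids)
    print(f"reconstructed {len(results)} events")
```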
Hundreds of physicists analyze data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider using the CMS Remote Analysis Builder and the CMS global pool to exploit the resources of the Worldwide LHC Computing Grid. Efficient use of such an extensive and expensive resource is crucial. At the same time, the collaboration is committed to minimizing the time to insight for every scientist, pushing for the fewest possible access restrictions to the full data sample and supporting the free choice of applications to run on the computing resources. Supporting...
Scheduling multi-core workflows in a global HTCondor pool is a multi-dimensional problem whose solution depends on the requirements of the job payloads, the characteristics of the available resources, and boundary conditions such as fair share and prioritization imposed on the matching of jobs to resources. Within the context of a dedicated task force, CMS has significantly increased the scheduling efficiency of reusable multi-core pilots through various improvements addressing the limitations of the GlideinWMS pilots, the accuracy of resource requests, the speed of the infrastructure, and the scheduling algorithms.
In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (HPC) resources present a diversity of challenges for the CMS experiment. In particular, network limitations at the compute nodes in HPC centers prevent CMS pilot jobs from connecting to its central HTCondor pool in order to receive payload jobs to be executed. To cope with this limitation, features have been developed in both the resource...
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will...
In the present run of the LHC, the CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining the serial execution of programs, thus optimizing the exploitation of multi-core architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs...
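A back-of-the-envelope comparison of the memory-per-core footprint for independent single-threaded processes versus one multi-threaded process sharing common read-only data, which is the gain described above. All numbers are illustrative assumptions, not measured CMS values.

```python
cores = 8
shared_gb = 1.5    # assumed memory for shared read-only data (geometry, conditions)
private_gb = 0.5   # assumed private memory per thread / per process

single_core_total = cores * (shared_gb + private_gb)   # each process duplicates the shared data
multi_thread_total = shared_gb + cores * private_gb    # shared data loaded once

print(f"{cores} single-core processes: {single_core_total:.1f} GB "
      f"({single_core_total / cores:.2f} GB per core)")
print(f"1 process with {cores} threads: {multi_thread_total:.1f} GB "
      f"({multi_thread_total / cores:.2f} GB per core)")
```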
The CMS Submission Infrastructure Global Pool, built on GlideinWMS and HTCondor, is a worldwide, distributed, dynamic pool responsible for the allocation of resources for all CMS computing workloads. Matching the continuously increasing demand for computing requires an anticipated assessment of its scalability limitations. In addition, the Global Pool must be able to expand in an increasingly heterogeneous environment, in terms of resource provisioning (combining Grid, HPC and Cloud resources) and workload submission. A dedicated testbed has been set up to simulate such...
In the next years, processor architectures based on much larger numbers of cores will be the most likely model to continue "Moore's Law" style throughput gains. This not only results in many more jobs running in parallel with LHC Run 1 era monolithic applications, but the memory requirements of these processes also push the limits of the worker nodes. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multicore...
CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in each node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and of the scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of pilot agents, which internally schedule and execute single payloads. The multithreaded processing is currently used...
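A simplified sketch of a pilot that holds a multicore slot and internally schedules single-core payloads, in the spirit of the approach described above; the payloads here are placeholder shell commands, not real CMS jobs, and the slot size is an assumed value.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

SLOT_CORES = 8  # cores assumed to be acquired by the pilot on the worker node

def run_payload(cmd: list[str]) -> int:
    """Execute one single-core payload and return its exit code."""
    return subprocess.run(cmd, check=False).returncode

# A queue of single-core payloads fetched by the pilot (placeholders).
payloads = [["/bin/sleep", "1"] for _ in range(20)]

# Keep at most SLOT_CORES payloads running concurrently inside the slot.
with ThreadPoolExecutor(max_workers=SLOT_CORES) as executor:
    exit_codes = list(executor.map(run_payload, payloads))

print(f"{exit_codes.count(0)} of {len(payloads)} payloads finished successfully")
```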
The CMS experiment is working to integrate an increasing number of High Performance Computing (HPC) resources into its distributed computing infrastructure. The case of the Barcelona Supercomputing Center (BSC) is particularly challenging, as severe network restrictions prevent the use of the standard solutions. The CIEMAT CMS group has performed significant work in order to overcome these constraints and make BSC resources available to CMS. The developments include adapting the workload management tools, replicating the software repository...
The CMS Submission Infrastructure (SI) is the main computing resource provisioning system for CMS workloads. A number of HTCondor pools are employed to manage this infrastructure, which aggregates geographically distributed resources from the WLCG and other providers. Historically, the model of authentication among the diverse components of this infrastructure has relied on the Grid Security Infrastructure (GSI), based on identities and X509 certificates. In contrast, commonly used modern standards are based on capabilities and tokens. This identified trend aims at...
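A generic illustration of capability-based (token) authorization, assuming the PyJWT library; it does not reproduce the actual CMS or WLCG token setup, and the issuer, audience and scope values are hypothetical examples.

```python
import time
import jwt  # PyJWT

SECRET = "example-signing-key"   # symmetric key for this self-contained demo

# Issue a short-lived token carrying capabilities (scopes) rather than only an identity.
token = jwt.encode(
    {
        "iss": "https://token-issuer.example.org",
        "aud": "htcondor-pool.example.org",
        "scope": "compute.read compute.modify",
        "exp": int(time.time()) + 600,
    },
    SECRET,
    algorithm="HS256",
)

# A service validates the token (signature, audience, expiry) and checks the capability.
claims = jwt.decode(token, SECRET, algorithms=["HS256"],
                    audience="htcondor-pool.example.org")
assert "compute.modify" in claims["scope"].split()
print("token accepted, scopes:", claims["scope"])
```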
In 2029 the LHC will start its high-luminosity program, with a boost in integrated luminosity resulting in an unprecedented amount of experimental and simulated data samples to be transferred, processed and stored in disk and tape systems across the worldwide computing Grid. Content delivery network solutions are being explored for the purposes of improving the performance of compute tasks reading input data via the wide area network, and also to provide a mechanism for the cost-effective deployment of lightweight storage systems supporting traditional or...
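A minimal sketch of the cache-first read pattern behind such content delivery solutions, assuming the XRootD Python bindings are installed. The cache and origin endpoints below are hypothetical placeholders, not actual CMS infrastructure hostnames.

```python
from XRootD import client

CACHE = "root://regional-cache.example.org:1094//"   # hypothetical cache endpoint
ORIGIN = "root://origin-storage.example.org:1094//"  # hypothetical origin storage

def open_with_fallback(lfn: str) -> client.File:
    """Try the regional cache first; fall back to the origin site on failure."""
    for prefix in (CACHE, ORIGIN):
        f = client.File()
        status, _ = f.open(prefix + lfn.lstrip("/"))
        if status.ok:
            return f
        f.close()
    raise IOError(f"could not open {lfn} via cache or origin")

# Example usage (hypothetical logical file name):
# f = open_with_fallback("/store/data/example/file.root")
# status, data = f.read(0, 1024)  # read the first kilobyte
```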
The increasingly larger data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current model, based on data reduction on the Grid infrastructure, followed by interactive analysis of manageable size samples on the physicists' individual computers, will be superseded by the adoption of Analysis Facilities. This rapidly evolving concept is converging to include dedicated hardware infrastructures...
The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU was initially employed as an opportunistic resource, exploited during the inter-fill periods of LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, being located on-site at the interaction point 5 (P5), where the CMS detector is installed. The resource has been configured to support the execution of critical tasks, such as prompt data...
The computing resource needs of the LHC experiments are expected to continue growing significantly during Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud resources will provide a comparable, or even dominant, fraction of the total compute capacity. The future years present a challenge for the experiments' resource provisioning models, both in terms of scalability and increasing complexity. The CMS Submission Infrastructure (SI) provisions resources for all CMS workflows. This infrastructure is...
While the computing landscape supporting the LHC experiments is currently dominated by x86 processors at WLCG sites, this configuration will evolve in the coming years. The LHC collaborations will be increasingly employing HPC and Cloud facilities to process the vast amounts of data expected during Run 3 and the future HL-LHC phase. These facilities often feature diverse compute resources, including alternative CPU architectures like ARM and IBM Power, as well as a variety of GPU specifications. Using these heterogeneous resources efficiently...
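A toy matchmaking sketch pairing payload requirements with heterogeneous slots (CPU architecture, optional GPUs), to illustrate the matching problem described above. The attribute names and values are illustrative assumptions, not the ClassAd attributes used in production.

```python
slots = [
    {"name": "site-A", "arch": "x86_64",  "cpus": 8,  "gpus": 0},
    {"name": "site-B", "arch": "aarch64", "cpus": 64, "gpus": 0},
    {"name": "site-C", "arch": "ppc64le", "cpus": 32, "gpus": 4},
]

payloads = [
    {"task": "reco",     "arch": {"x86_64", "aarch64"}, "cpus": 4, "gpus": 0},
    {"task": "ml-train", "arch": {"x86_64", "ppc64le"}, "cpus": 8, "gpus": 1},
]

def matches(payload: dict, slot: dict) -> bool:
    """A slot is usable if its architecture is supported and it is large enough."""
    return (slot["arch"] in payload["arch"]
            and slot["cpus"] >= payload["cpus"]
            and slot["gpus"] >= payload["gpus"])

for p in payloads:
    compatible = [s["name"] for s in slots if matches(p, s)]
    print(f"{p['task']}: compatible slots -> {compatible or 'none'}")
```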