- Particle Physics Theoretical and Experimental Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Computational Physics and Python Applications
- Neutrino Physics Research
- Astrophysics and Cosmic Phenomena
- Distributed and Parallel Computing Systems
- Black Holes and Theoretical Physics
- Advanced Data Storage Technologies
- Medical Imaging Techniques and Applications
- Scientific Computing and Data Management
- Particle Accelerators and Free-Electron Lasers
- Nuclear Reactor Physics and Engineering
- Gamma-Ray Bursts and Supernovae
- Atomic and Subatomic Physics Research
- Parallel Computing and Optimization Techniques
- Radiation Detection and Scintillator Technologies
- Radiation Therapy and Dosimetry
- Peer-to-Peer Network Technologies
- Optical Properties and Cooling Technologies in Crystalline Materials
- Nuclear Physics Research Studies
- International Science and Diplomacy
Unidades Centrales Científico-Técnicas
2008-2025
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas
2016-2025
Universidad Complutense de Madrid
2008-2023
Universidad Autónoma de Madrid
2014-2023
University of Alberta
2022
Center for Migration Studies of New York
2021
University of Belgrade
2012-2015
Fermi National Accelerator Laboratory
2015
European Organization for Nuclear Research
2012-2014
National Taiwan University
2014
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires a commensurate investment in the R&D of software to acquire, manage, process and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper...
A one-dimensional model to study the heat and mass transfer inside and immediately above a wet counter-flow cooling tower is described. The thermodynamic model assigns zone-specific Merkel numbers to each of the rain, fill and spray zones, and it includes an atmospheric plume model. Using the present formulation, zone-by-zone rates of heat rejection and water evaporation can be estimated, as well as the visible plume height. The model is validated against well-established Poppe methods as well as select field data. Cooling performance and visibility are evaluated under...
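For context, the zone-specific Merkel numbers mentioned above follow the standard cooling-tower definition; the sketch below uses generic textbook notation rather than the paper's own symbols.

```latex
% Merkel number of a single tower zone (rain, fill or spray), standard notation:
%   \dot{m}_w          water mass flow rate through the zone
%   c_{pw}             specific heat of water
%   T_{w,i}, T_{w,o}   water temperatures entering and leaving the zone
%   i_{masw}           enthalpy of saturated air at the local water temperature
%   i_{ma}             enthalpy of the bulk air stream
%   h_d A              mass-transfer coefficient times interfacial area
\mathrm{Me} \;=\; \frac{h_d\,A}{\dot{m}_w}
            \;=\; \int_{T_{w,o}}^{T_{w,i}} \frac{c_{pw}\,\mathrm{d}T_w}{i_{masw}-i_{ma}}
```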
Complex scientific workflows can process large amounts of data using thousands of tasks. The turnaround times of these workflows are often affected by various latencies, such as the resource discovery, scheduling and data access latencies for the individual workflow processes or actors. Minimizing these latencies will improve the overall execution time of a workflow and thus lead to a more efficient and robust processing environment. In this paper, we propose a pilot job concept that has intelligent reuse strategies to minimize scheduling, queuing and data access latencies. The results have...
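The reuse idea can be illustrated with a minimal pilot loop: instead of submitting one grid job per task, a single agent holds its slot, pulls payloads from a central queue and keeps already staged inputs for later payloads. The sketch below is illustrative only; the queue endpoint, message format and helper names are hypothetical, not those of the system described in the paper.

```python
import json
import time
import urllib.request

QUEUE_URL = "https://taskqueue.example.org/next-task"  # hypothetical central queue endpoint
_staged = {}  # inputs already staged on this worker node, reused across payloads


def fetch_task():
    """Ask the central queue for the next matching payload; None if nothing is queued."""
    try:
        with urllib.request.urlopen(QUEUE_URL, timeout=30) as resp:
            return json.loads(resp.read())
    except OSError:
        return None


def stage_input(lfn):
    """Stage an input file only once per pilot: later payloads reuse the local copy."""
    if lfn not in _staged:
        # placeholder for an actual transfer (e.g. a copy from grid storage)
        _staged[lfn] = "/tmp/" + lfn.replace("/", "_")
    return _staged[lfn]


def run_pilot(max_idle=600, poll=30):
    """Pilot main loop: hold the slot and run payloads back to back, so the
    per-task scheduling and queuing latency is paid only once per pilot."""
    idle_since = time.time()
    while time.time() - idle_since < max_idle:
        task = fetch_task()
        if task is None:
            time.sleep(poll)  # queue empty: keep the slot warm and retry
            continue
        local_inputs = [stage_input(lfn) for lfn in task.get("inputs", [])]
        print("running", task.get("command"), "with", local_inputs)  # payload execution stub
        idle_since = time.time()
```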
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments...
Several proposals exist that try to enhance Distributed Hash Table (DHT) systems with broadcasting capabilities. None of them, however, specifically addresses the particularities of Kademlia, an important DHT used in well-known real applications. Our work analyzes the implications of Kademlia's use of an XOR-based distance metric and subsequently discusses the applicability of the existing proposals to it. Based on this, several broadcasting algorithms for Kademlia have been implemented and experimentally evaluated under different conditions of churn...
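As background, the two ingredients this analysis builds on are easy to state in code: the XOR metric, and the fact that each k-bucket covers a disjoint subtree that a broadcast can delegate to one contact. The sketch below shows a generic prefix-partitioned forwarding scheme for illustration only, not any of the specific algorithms evaluated in the paper.

```python
import random

ID_BITS = 160  # Kademlia uses 160-bit node identifiers


def xor_distance(a, b):
    """Kademlia's metric: the distance between two IDs is their bitwise XOR."""
    return a ^ b


def bucket_index(node_id, other_id):
    """Index of the k-bucket other_id falls into, i.e. the position of the
    highest bit in which the two IDs differ (-1 means the IDs are equal)."""
    return xor_distance(node_id, other_id).bit_length() - 1


def broadcast(node_id, contacts, message, from_bucket=ID_BITS):
    """Prefix-partitioned broadcast sketch: forward the message to one contact
    per bucket below `from_bucket`, delegating that subtree to the contact."""
    for i in range(from_bucket - 1, -1, -1):
        candidates = [c for c in contacts if bucket_index(node_id, c) == i]
        if candidates:
            target = random.choice(candidates)
            # in a real implementation this would be an RPC; the receiver would
            # call broadcast() again with from_bucket=i to cover its own subtree
            print(f"send '{message}' to {target:040x} (responsible for bucket {i})")
```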
Pull-based late-binding overlays are used in some of today's largest computational grids. Job agents are submitted to resources with the duty of retrieving the real workload from a central queue at runtime. This helps overcome some of the problems of these complex environments: heterogeneity, imprecise status information and relatively high failure rates. In addition, the late job assignment allows dynamic adaptation to changes in grid conditions or user priorities. However, as the scale grows, the central queue may become a bottleneck for the whole...
Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time.
In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (HPC) resources presents a diversity of challenges for the CMS experiment. In particular, network limitations at the compute nodes in HPC centers prevent CMS pilot jobs from connecting to its central HTCondor pool in order to receive payloads to be executed. To cope with this limitation, new features have been developed in both the resource...
The CMS experiment is working to integrate an increasing number of High Performance Computing (HPC) resources into its distributed computing infrastructure. The case of the Barcelona Supercomputing Center (BSC) is particularly challenging, as severe network restrictions prevent the use of standard solutions. The CIEMAT group has performed significant work in order to overcome these constraints and make BSC available to CMS. The developments include adapting the workload management tools, replicating the software repository...
In 2029 the LHC will start its high-luminosity program, with a boost in integrated luminosity resulting in an unprecedented amount of experimental and simulated data samples to be transferred, processed and stored in disk and tape systems across the worldwide computing Grid. Content delivery network solutions are being explored for the purposes of improving the performance of compute tasks reading input data via the wide area network, and also to provide a mechanism for the cost-effective deployment of lightweight storage systems supporting traditional or...
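The core mechanism behind such content delivery solutions (for instance XCache-style proxies) is a read-through cache placed near the compute nodes: the first task pays the wide-area read, later tasks at the site read the local copy. The sketch below only illustrates that idea, with a hypothetical cache directory and plain HTTP transfers, not the actual deployment discussed here.

```python
import hashlib
import os
import shutil
import urllib.request

CACHE_DIR = "/tmp/readthrough-cache"  # hypothetical local cache area at the site


def cached_open(url):
    """Read-through cache: fetch the file over the WAN on first access and keep a
    local copy; subsequent accesses at the site read the nearby copy instead."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local = os.path.join(CACHE_DIR, hashlib.sha1(url.encode()).hexdigest())
    if not os.path.exists(local):
        with urllib.request.urlopen(url) as remote, open(local, "wb") as out:
            shutil.copyfileobj(remote, out)  # WAN read, paid only once per file
    return open(local, "rb")  # later reads are local disk / LAN reads
```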
The increasingly larger data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current model, based on data reduction on the Grid infrastructure followed by interactive analysis of manageable-size samples on physicists' individual computers, will be superseded by the adoption of Analysis Facilities. This rapidly evolving concept is converging to include dedicated hardware infrastructures...
The current computing models from the LHC experiments indicate that much larger resource increases would be required by the HL-LHC era (2026+) than those that technology evolution at a constant budget could bring. Since the worldwide budget for computing is not expected to increase, many research activities have emerged to improve the performance of the processing software applications, as well as to propose more efficient deployment scenarios and data management techniques, which might reduce this increase in required resources. The massively...
CMS is tackling the exploitation of CPU resources at HPC centers where compute nodes do not have network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to application software (CernVM-FS) and conditions data (Frontier), management of input and output files (data services), and job management (HTCondor). Finding an alternative route to these services is challenging. Seamless integration in the production system without causing any operational overhead is a key goal. The case...
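A common building block for such alternative routes is a site-internal forward proxy that the otherwise isolated nodes can reach. The sketch below only illustrates that generic pattern, with a hypothetical gateway host; it is not the actual CMS solution, which also involves changes on the workload management side.

```python
import os
import socket

GATEWAY = "gateway.hpc.example.es:3128"  # hypothetical forward proxy reachable from the compute nodes


def has_outbound_connectivity(host="cvmfs-stratum-one.cern.ch", port=80, timeout=5):
    """Crude check for direct Internet access from the worker node."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False


def configure_environment():
    """If the node has no outbound route, point HTTP-based services (software
    distribution, conditions data) at the site-internal forward proxy."""
    if has_outbound_connectivity():
        return
    os.environ["http_proxy"] = "http://" + GATEWAY
    os.environ["https_proxy"] = "http://" + GATEWAY
    os.environ["CVMFS_HTTP_PROXY"] = "http://" + GATEWAY  # CernVM-FS proxy setting
```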
There is a general trend in WLCG towards the federation of resources, aiming for increased simplicity, efficiency, flexibility and availability. Although a VO-agnostic federation of resources between two independent, autonomous resource centres may prove arduous, a considerable amount of flexibility in resource sharing can be achieved in the context of a single VO with a relatively simple approach. We have demonstrated this for PIC and CIEMAT, the Spanish Tier-1 and Tier-2 sites for CMS, by making use of the existing CMS xrootd infrastructure and profiting from...
An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 (Computing, Software and Analysis) challenge and in preparation for CSA07 is presented.
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up analysis to meet the needs of HL-LHC and DUNE, as well as the very pressing needs of LHC Run 3 analysis. The workshop was themed around six particular topics, which were felt to capture the key questions, opportunities and challenges. Each topic was given a plenary session introduction, often with speakers summarising the state of the art and the next steps. This was then followed by parallel sessions, much...