X. Espinal Curull
- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Particle Detector Development and Performance
- Quantum Chromodynamics and Particle Interactions
- Dark Matter and Cosmic Phenomena
- Distributed and Parallel Computing Systems
- Neutrino Physics Research
- Advanced Data Storage Technologies
- Cosmology and Gravitation Theories
- Computational Physics and Python Applications
- Astrophysics and Cosmic Phenomena
- Scientific Computing and Data Management
- Black Holes and Theoretical Physics
- Advanced Mathematical Theories
- Muon and Positron Interactions and Applications
- Medical Imaging Techniques and Applications
- Radiation Detection and Scintillator Technologies
- Parallel Computing and Optimization Techniques
- Big Data Technologies and Applications
- Peer-to-Peer Network Technologies
- Superconducting Materials and Applications
- Particle Accelerators and Free-Electron Lasers
- Distributed Systems and Fault Tolerance
- Network Traffic and Congestion Control
- Nuclear Reactor Physics and Engineering
European Organization for Nuclear Research
2011-2024
European Council
2023
Universitat Autònoma de Barcelona
2008-2020
Kurchatov Institute
2019
Institute for High Energy Physics
2007-2017
Port d'Informació Científica
2008-2017
Institució Catalana de Recerca i Estudis Avançats
2011-2014
Uppsala University
2014
The University of Adelaide
2013
University of Birmingham
2013
We present measurements of nu_mu disappearance in K2K, the KEK to Kamioka long-baseline neutrino oscillation experiment. One hundred and twelve beam-originated neutrino events are observed in the fiducial volume of Super-Kamiokande, with an expectation of 158.1^{+9.2}_{-8.6} events without oscillation. A distortion of the energy spectrum is also seen in 58 single-ring muon-like events with reconstructed energies. The probability that the observations are explained by the expectation for no neutrino oscillation is 0.0015% (4.3 sigma). In a two-flavor oscillation scenario, the allowed Delta m^2 region at...
We present results for $\nu_\mu$ oscillation in the KEK to Kamioka (K2K) long-baseline neutrino experiment. K2K uses an accelerator-produced $\nu_\mu$ beam with a mean energy of 1.3 GeV directed at the Super-Kamiokande detector. We observed the energy-dependent disappearance of $\nu_\mu$, which we presume to have oscillated into $\nu_\tau$. The probability that we would observe these events if there is no neutrino oscillation is 0.0050% ($4.0\sigma$).
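For context, the two-flavor fits quoted in the disappearance abstracts above rest on the standard muon-neutrino survival probability, with baseline $L$ and neutrino energy $E_\nu$ (this formula is textbook oscillation phenomenology, not taken from the abstracts themselves):

```latex
P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta \,
  \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}
                      {E_\nu\,[\mathrm{GeV}]} \right)
```

A deficit of events together with a distortion of the reconstructed energy spectrum is exactly the signature this expression predicts: the position of the dip in $E_\nu$ constrains $\Delta m^2$, while its depth constrains $\sin^2 2\theta$.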
The weak nucleon axial-vector form factor for quasi-elastic interactions is determined using neutrino interaction data from the K2K Scintillating Fiber detector in the neutrino beam at KEK. More than 12,000 events are analyzed, of which half are charged-current quasi-elastic interactions nu_mu n to mu- p occurring primarily in oxygen nuclei. We use a relativistic Fermi gas model and assume the form factor is approximately a dipole with one free parameter, the axial vector mass M_A, and fit the shape of the distribution of the square of the momentum transfer to the nucleus. Our best fit result is M_A = 1.20...
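The dipole ansatz named in this abstract has an explicit one-parameter form, standard in the quasi-elastic literature (the normalization $F_A(0)$ quoted here comes from beta-decay measurements, not from the abstract):

```latex
F_A(Q^2) = \frac{F_A(0)}{\left(1 + Q^2 / M_A^2\right)^2},
\qquad F_A(0) = g_A \simeq -1.267
```

With $F_A(0)$ fixed externally, the only free parameter of the shape fit is the axial mass $M_A$, which is why these analyses can quote a single fitted number.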
We report the result from a search for charged-current coherent pion production induced by muon neutrinos with a mean energy of 1.3 GeV. The data are collected with a fully active scintillator detector in the K2K long-baseline neutrino oscillation experiment. No evidence for coherent pion production is observed, and an upper limit is set on the cross section ratio to the total charged-current interaction at the 90% confidence level. This is the first experimental limit for coherent charged pion production in the energy region of a few GeV.
We performed an improved search for $\nu_\mu \to \nu_e$ oscillation with the KEK to Kamioka (K2K) long-baseline neutrino experiment, using the full data sample of $9.2 \times 10^{19}$ protons on target. No evidence for a $\nu_e$ appearance signal was found, and we set bounds on the oscillation parameters. At $\Delta m^2 = 2.8 \times 10^{-3}\,\mathrm{eV}^2$...
Single charged pion production in charged-current muon neutrino interactions with carbon is studied using data collected in the K2K long-baseline neutrino oscillation experiment. The mean energy of the incident neutrinos is 1.3 GeV. The data used in this analysis are mainly from a fully active scintillator detector, SciBar. The cross section for single π+ production in the resonance region (W < 2 GeV/c²) relative to the quasielastic cross section is found to be 0.734 (+0.140, −0.153). The energy-dependent cross-section ratio is also measured. The results are consistent with a previous experiment and with the prediction of our...
The computing strategy document for HL-LHC identifies storage as one of the main WLCG challenges a decade from now. Under the naive assumption of applying today's computing model, the ATLAS and CMS experiments will need an order of magnitude more resources than what could realistically be provided by funding agencies at the same cost as today. The evolution of facilities and of the way storage is organized and consolidated will play a key role in how this possible shortage is addressed. In this contribution we describe the architecture of a data lake, intended as a service geographically...
The nucleon weak axial-vector form factor for quasielastic interactions is determined from the analysis of neutrino–carbon interactions in the K2K near detector with the fully active SciBar tracking calorimeter. The analysis consists of a fit to the distribution of the four-momentum transfer squared, Q² = −q² = −(Pμ−Pν)², for the quasi-elastic reaction νμ n → μ− p induced by neutrinos on carbon nuclei. The form factor can be approximated by a dipole with the axial mass MA as a free parameter; the best fit is MA = 1.144 ± 0.077(fit) +0.078/−0.072(syst).
In this paper, we report on the measurement of the rate of inclusive $\pi^0$ production induced by charged-current neutrino interactions in a $\mathrm{C_8H_8}$ target at a mean neutrino energy of 1.3 GeV in the K2K near detector. Out of a sample of 11 606 charged-current interactions, we select 479 events with two reconstructed photons. We find that the cross section for inclusive $\pi^0$ production relative to the charged-current quasielastic cross section is...
The CERN-IT Data Storage and Services (DSS) group stores and provides access to data coming from the LHC and other physics experiments. We implement specialised storage services and provide tools for optimal data management, based on the evolution of data volumes, the available technologies and the observed usage patterns of experiments and users. Our current solutions are CASTOR, a highly-reliable tape-backed store for heavy-duty Tier-0 workflows, and EOS, a disk-only store for full-scale analysis activities. CASTOR is evolving towards a simplified disk layer in...
HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economical. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for storage and strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model with respect to resource...
The European-funded ESCAPE project will prototype a shared solution to computing challenges in the context of the European Open Science Cloud. It targets Astronomy and Particle Physics facilities and research infrastructures and focuses on developing solutions for handling Exabyte-scale datasets. The DIOS work package aims at delivering a Data Infrastructure for Open Science. Such an infrastructure would be a non-HEP-specific implementation of the data lake concept elaborated in the HSF Community White Paper and endorsed in the WLCG Strategy...
CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses two Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. We also provide sizeable on-demand services, most notably for OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational...
EOS is an open source distributed filesystem developed and used mainly at CERN. It provides low latency, high availability, strong authentication, and multiple replication schemas, as well as several access protocols and features. Deployment and operations remain simple, and EOS is currently being used by the experiments at CERN, providing a total raw storage space of 86 PB. A brief overview of EOS's architecture is given, then its main latest features are reviewed and some operational facts are reported. Finally, emphasis is laid on the new infrastructure-aware file...
The atmospheric neutrino background for proton decay via p → e+π0 in ring-imaging water Cherenkov detectors is studied with an artificial accelerator neutrino beam for the first time. In total, 3.14 × 10⁵ events, corresponding to about 10 megaton-years of atmospheric neutrino interactions, were collected by a 1000 ton water Cherenkov detector (KT). The KT charged-current single π0 production data are well reproduced by the simulation programs of neutrino and secondary hadronic interactions used in the Super-Kamiokande (SK) proton decay search. The obtained background rate for SK from atmospheric neutrinos whose energies are below 3 GeV...
The Virtual Research Environment is an analysis platform developed at CERN serving the needs of scientific communities involved in European Projects. Its scope is to facilitate the development of end-to-end physics workflows, providing researchers with access to the infrastructure and digital content necessary to produce and preserve a scientific result in compliance with FAIR principles. The platform is aimed at demonstrating how sciences spanning from High Energy Physics to Astrophysics could benefit from the usage of common technologies, initially born...
A new near detector for the K2K long-baseline neutrino experiment, SciBar, was constructed and started data taking to study neutrino interactions. In K2K, neutrino oscillation is studied by comparing the number of interactions and the energy spectrum between the near and far detectors. In order to study oscillations more precisely, it is necessary to improve the measurement below 1 GeV, where the latest results suggest maximum oscillation. For that purpose, SciBar is designed to be fully active with fine segmentation. We present the design and basic performance. All...
Dependability, resilience, adaptability and efficiency: growing requirements demand tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run) and 11 PB...
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for shifts, for estimating data processing and transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities in case of potential problems. SSB provides a real-time aggregated monitoring view...
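The automatic site exclusion described above can be sketched as a simple threshold rule over per-site efficiency metrics. The names, metrics and thresholds below are illustrative assumptions, not ATLAS's actual SSB configuration:

```python
# Hypothetical sketch of rule-based auto-exclusion of poorly performing
# sites; thresholds and site names are invented for illustration.

EFFICIENCY_THRESHOLD = 0.80  # assumed cut-off for transfer efficiency
MIN_SAMPLES = 20             # ignore sites with too little recent activity

def sites_to_exclude(site_metrics):
    """Return sites whose recent transfer efficiency falls below threshold.

    site_metrics maps a site name to (successful_transfers, total_transfers).
    """
    excluded = []
    for site, (ok, total) in site_metrics.items():
        if total < MIN_SAMPLES:
            continue  # not enough data to judge the site reliably
        if ok / total < EFFICIENCY_THRESHOLD:
            excluded.append(site)
    return sorted(excluded)

metrics = {
    "SITE_A": (95, 100),  # 95% efficiency -> kept
    "SITE_B": (40, 100),  # 40% efficiency -> excluded
    "SITE_C": (2, 5),     # too few samples -> kept
}
print(sites_to_exclude(metrics))  # -> ['SITE_B']
```

The point of routing such a rule through a monitoring framework rather than ad-hoc scripts is that the same aggregated metrics drive both the human shifters' view and the automated decisions.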
This contribution describes the evolution of the main CERN storage system, CASTOR, as it manages the bulk data stream of the LHC and other experiments, achieving over 90 PB of stored data by the end of LHC Run 1. This evolution was marked by the introduction of policies to optimize tape sub-system throughput, going towards a cold storage system where data placement is managed by the experiments' production managers. More efficient tape migrations and recalls have been implemented and deployed, and meta-data operations have been streamlined to greatly reduce the overhead due to small files. A repack facility is now...
In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technologies provide an alternative to using local disk: such accesses can today reach levels of throughput similar to or exceeding local disks. Common choices for access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol shows promise as it is a simple, lightweight protocol. It also enables the use of standard solutions such as HTTP caching and load balancing...
The 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP), organized by Jefferson Lab, took place in Norfolk, Virginia from 5–11 May 2023. The conference attracted 581 registered participants from 28 different countries. Scientific presentations were made over the 5 days of the conference. These were divided between 20 long talks and 2 keynotes, which were presented in plenary sessions; 450+ short talks in parallel sessions; and 140+ posters split between two dedicated sessions.
In order to accommodate the growing demand for storage and computing capacity from the LHC experiments, in 2012 CERN tendered for a remote computer centre. The potential negative performance implications of geographical distance (i.e. network latency) within the same logical "site" on physics services have been investigated. The overall impact should be acceptable, but some access patterns might suffer significantly. Recent EOS changes help to mitigate the effects, and experiments may need to adjust their job parameters.
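Why latency hurts some access patterns but not others can be seen with a back-of-the-envelope model: a client doing strictly synchronous block reads pays one round trip per request, so small-block sequential access degrades sharply with distance while large streaming transfers barely notice. The block size, RTTs and link speed below are illustrative, not CERN's measured values:

```python
# Toy model of synchronous remote reads: each request costs one round
# trip (RTT) plus the wire time for the block. All numbers illustrative.

def effective_throughput_mb_s(block_kb, rtt_ms, link_mb_s):
    """Throughput of strictly sequential block reads over a network link."""
    block_mb = block_kb / 1024.0
    transfer_s = block_mb / link_mb_s        # time on the wire per block
    total_s = rtt_ms / 1000.0 + transfer_s   # plus one round trip per block
    return block_mb / total_s

# Compare LAN-style latency with the tens-of-ms RTT of a remote centre
# on a nominally fast (assumed 1000 MB/s) link:
for rtt in (0.2, 20.0):
    mbs = effective_throughput_mb_s(64, rtt, 1000)
    print(f"RTT {rtt:5.1f} ms -> {mbs:6.1f} MB/s")
```

The model makes the mitigation strategies obvious: larger blocks, read-ahead, or more in-flight requests amortize the round trip, which is the kind of adjustment to job parameters the abstract alludes to.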
The High Luminosity phase of the LHC, which aims for a tenfold increase in the luminosity of proton-proton collisions, is expected to start operation in eight years. An unprecedented scientific data volume at the multi-exabyte scale will be delivered to the particle physics experiments at CERN. This amount of data has to be stored, and the corresponding technology must ensure fast and reliable data delivery for processing by the scientific community all over the world. The present LHC computing model is not able to provide the required infrastructure growth, even taking into...
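The resource shortfall argument made in this and the earlier HL-LHC abstracts is essentially a compound-growth comparison: data needs grow faster than what a flat budget buys through technology trends. A toy calculation makes the shape of the gap concrete; both growth rates below are illustrative assumptions, not WLCG projections:

```python
# Toy flat-budget model: needs assumed to grow ~40%/yr over a decade,
# while a flat budget buys ~15%/yr more capacity from technology trends.
# Both rates are invented for illustration.

def growth(rate_per_year, years):
    """Compound growth factor after the given number of years."""
    return (1.0 + rate_per_year) ** years

needs = growth(0.40, 10)    # how much capacity is needed, relative to today
afford = growth(0.15, 10)   # how much a flat budget provides
print(f"needs ~{needs:.1f}x, affordable ~{afford:.1f}x, "
      f"shortfall ~{needs / afford:.1f}x")
```

Under such assumptions the gap is several-fold and compounds each year, which is why the abstracts argue for structural changes (consolidated storage, data lakes) rather than incremental procurement.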