- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Computational Physics and Python Applications
- Neutrino Physics Research
- Distributed and Parallel Computing Systems
- Advanced Data Storage Technologies
- Astrophysics and Cosmic Phenomena
- Parallel Computing and Optimization Techniques
- Black Holes and Theoretical Physics
- Medical Imaging Techniques and Applications
- Particle Accelerators and Free-Electron Lasers
- Scientific Computing and Data Management
- Radiation Detection and Scintillator Technologies
- Interconnection Networks and Systems
- CCD and CMOS Imaging Sensors
- Superconducting Materials and Applications
- Gamma-Ray Bursts and Supernovae
- Nuclear Reactor Physics and Engineering
- Atomic and Subatomic Physics Research
- Radiation Therapy and Dosimetry
- Distributed Systems and Fault Tolerance
European Organization for Nuclear Research
2016-2025
Institute of High Energy Physics
2015-2024
Istituto Nazionale di Fisica Nucleare, Sezione di Firenze
2014-2024
University of Antwerp
2024
A. Alikhanyan National Laboratory
2022-2024
Paul Scherrer Institute
2014-2023
Fermi Research Alliance
2019
University of California, Los Angeles
2019
Massachusetts Institute of Technology
2019
National Technical University of Athens
2019
The discovery by the ATLAS and CMS experiments of a new boson with a mass of around 125 GeV, whose measured properties are compatible with those of the Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A circular e+e- collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It offers a clean experimental environment, produces high luminosity for top-quark, W and Z studies, and accommodates multiple...
For the Phase-2 upgrade of the CMS experiment, the central DAQ group has designed and developed two custom ATCA boards. These boards provide the interfaces between the sub-detector electronics and the central DAQ systems. This paper describes our experience with the chosen prototyping strategy, with a focus on the design modification choices made along the way. It concludes with a brief overview of recent firmware developments and a look at the transition towards full board production.
The CMS data acquisition system is made of two major subsystems: event building and event filtering. The paper presented here describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ relies on industry-standard networks and processing equipment. Adopting a single infrastructure for all subsystems of the experiment imposes, however, a number of different requirements. High efficiency and configuration flexibility are among the most important ones. XDAQ has matured over eight years...
The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual components brings the need to test and commission them separately from the existing ones, so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Distribution System...
The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the high-level trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and of the design of the DAQ software is given. We discuss the performance and operational experience from the first months of physics data taking.
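The quoted design figures are mutually consistent; a quick back-of-the-envelope check (illustrative only, not from the paper itself) of the implied average event and fragment sizes:

```python
# Hypothetical sanity check of the DAQ design figures quoted above.
input_rate_hz = 100e3    # first-level trigger accept rate: 100 kHz
throughput_bps = 100e9   # aggregated event-builder throughput: 100 GB/s
n_sources = 500          # approximate number of read-out sources

event_size = throughput_bps / input_rate_hz  # average built-event size
fragment_size = event_size / n_sources       # average fragment per source

print(f"event size    ~ {event_size / 1e6:.0f} MB")   # ~1 MB per event
print(f"fragment size ~ {fragment_size / 1e3:.0f} kB")  # ~2 kB per source
```

In other words, the 100 GB/s figure corresponds to events of about 1 MB built from roughly 2 kB fragments per source.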
For the upgrade of the DAQ of the CMS experiment in 2013/2014, an interface between the custom detector Front End Drivers (FEDs) and the new event-builder network has to be designed. For a loss-less data collection from more than 600 FEDs, an FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and the modifications made to TCP in order to simplify its implementation, together with a set of performance measurements which were carried out on the current prototype.
The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 it will undergo a major upgrade to address the obsolescence of the current hardware and the requirements posed by the LHC accelerator and various detector components. For a loss-less collection of data from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the TCP implementation complexity, the group developed a simplified, unidirectional, but RFC 793...
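A minimal sketch of the unidirectional, loss-less stream model described above — a sender pushes fixed-size event fragments over a single TCP connection while the receiver only reads, never writes back. This is plain Python sockets with an assumed 2 kB fragment size, purely for illustration, not the FPGA implementation:

```python
# Illustrative sketch of a unidirectional TCP fragment stream (loopback).
import socket
import threading

FRAGMENT_SIZE = 2048  # assumed fragment size, for illustration only
N_FRAGMENTS = 100

def receiver(listener, out):
    """Accept one connection and read until the sender closes it."""
    conn, _ = listener.accept()
    with conn:
        buf = b""
        while True:
            chunk = conn.recv(65536)
            if not chunk:  # EOF: sender closed the stream
                break
            buf += chunk
    out.append(buf)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # ephemeral port on loopback
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=receiver, args=(listener, received))
t.start()

# Sender: stream N_FRAGMENTS fixed-size fragments, then close.
with socket.create_connection(("127.0.0.1", port)) as s:
    for i in range(N_FRAGMENTS):
        s.sendall(bytes([i % 256]) * FRAGMENT_SIZE)

t.join()
listener.close()
print("received", len(received[0]), "bytes loss-free")
```

TCP's in-order, reliable delivery is what makes the "loss-less" guarantee cheap for the sender; dropping the reverse data direction is what allows the hardware implementation to be simplified.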
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly,...
This paper describes recent progress on the design of the DAQ and Timing Hub, or DTH, an ATCA (Advanced Telecommunications Computing Architecture) hub board intended for the Phase-2 upgrade of the CMS experiment. Prototyping was originally divided into multiple feature lines, spanning all the different aspects of DTH functionality. The second prototype merges the R&D prototyping lines into a single board, which is intended to be the production candidate. Emphasis is put on the process and experience in going from the first prototype, which included...
The CMS data acquisition (DAQ) system is implemented as a service-oriented architecture where DAQ applications, as well as general applications such as monitoring and error reporting, are run as self-contained services. The task of deployment and operation of these services is achieved by using several heterogeneous facilities and custom configuration scripts in different languages. In this work, we restructure the existing system into a homogeneous, scalable cloud by adopting a uniform paradigm, with everything orchestrated in an environment with standardized...
A novel Data Acquisition (DAQ) system, known as Level-1 Scouting (L1DS), is being introduced as part of the Level-1 (L1) trigger of the CMS experiment. The L1DS system will receive L1 intermediate primitives from the Phase-2 trigger on DAQ-800 custom boards designed for the central DAQ. Firmware has been developed for this purpose on the Xilinx VCU128 board, which has features similar to one half of a DAQ-800, and validated in a demonstrator during LHC Run-3. This contribution describes the firmware development in view of the target design of the DAQ-800.
The CMS data acquisition system is designed to build and filter events originating from 476 detector sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage is a set of event builders called front-end driver (FED) builders. These are based on Myrinet technology and pre-assemble fragments from groups of about eight sources. The second stage, the readout builders, performs the building of full events. A single readout builder assembles events from about 60 sources of 16 kB...
Summary form only given. The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and infrastructure will have reached the end of their lifetime by the time the LHC...
The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of simple and composed values made available by software components and hardware devices. A key aspect is that the detection of deviations from the specified behaviour is supported in a timely manner, which is a prerequisite in order to take...
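The core of such deviation detection can be sketched as comparing each collected metric against a declared allowed band. The metric names and limits below are hypothetical illustrations, not the CMS monitoring software:

```python
# Hypothetical sketch of threshold-based deviation detection over
# collected monitoring values; names and bands are illustrative only.
expected = {
    "event_rate_hz": (90e3, 110e3),  # allowed band for the trigger rate
    "backlog_events": (0, 1000),     # allowed event-builder backlog
}

def check(metrics):
    """Return (name, value) pairs that deviate from their allowed band."""
    alarms = []
    for name, value in metrics.items():
        lo, hi = expected.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alarms.append((name, value))
    return alarms

# A healthy rate but an excessive backlog triggers one alarm.
print(check({"event_rate_hz": 100e3, "backlog_events": 5000}))
```

Real systems layer aggregation, trend analysis, and alarm escalation on top, but the per-metric comparison against a specification is the timely-detection primitive the abstract refers to.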
In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned monolithic architectures in the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunications and the internet, and the corresponding increase in memory bandwidth available in off-the-shelf consumer equipment. This led to a generation where...
The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. It needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude, and the final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate...
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running the software of the high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on O(10000) CPU cores built from commodity hardware....
The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, LHC-synchronous clock signals for use by the timing detectors. The current paper presents the first measurements performed on the initial prototype, with a focus...
The High Luminosity LHC (HL-LHC) will start operating in 2027 after the third Long Shutdown (LS3), and is designed to provide an ultimate instantaneous luminosity of 7.5 × 10^34 cm^−2 s^−1, at the price of extreme pileup of up to 200 interactions per crossing. The number of overlapping HL-LHC collisions, their density, and the resulting intense radiation environment warrant an almost complete upgrade of the CMS detector. The upgraded detector will be read out by approximately fifty thousand high-speed front-end optical links...