- Particle Detector Development and Performance
- Advanced Data Storage Technologies
- Particle physics theoretical and experimental studies
- Distributed and Parallel Computing Systems
- Neutrino Physics Research
- Astrophysics and Cosmic Phenomena
- Advanced Optical Sensing Technologies
- Parallel Computing and Optimization Techniques
- Scientific Computing and Data Management
- Dark Matter and Cosmic Phenomena
- Radiation Detection and Scintillator Technologies
- Atomic and Subatomic Physics Research
- Particle Accelerators and Free-Electron Lasers
- High-Energy Particle Collisions Research
- Distributed systems and fault tolerance
- Caching and Content Delivery
- Business Process Modeling and Analysis
- Quantum, superfluid, helium dynamics
- Robotics and Automated Systems
- Nuclear Physics and Applications
- Sensor Technology and Measurement Systems
- Multi-Agent Systems and Negotiation
- Neural Networks and Applications
- Semantic Web and Ontologies
- Advanced NMR Techniques and Applications
European Organization for Nuclear Research
2015-2024
Eötvös Loránd University
2015-2018
Momentum Research
2017
Center for Migration Studies of New York
2015
FASER, the ForwArd Search ExpeRiment, is an experiment dedicated to searching for light, extremely weakly-interacting particles at CERN's Large Hadron Collider (LHC). Such particles may be produced in the very forward direction of the LHC's high-energy collisions and then decay to visible particles inside the FASER detector, which is placed 480 m downstream of the ATLAS interaction point, aligned with the beam axis. FASER also includes a sub-detector, FASER$\nu$, designed to detect neutrinos produced at the LHC and to study their properties. In this paper, each...
DUNE will be the world's largest neutrino experiment, due to take data in 2025. Here we describe the data acquisition (DAQ) system for one of its prototypes, ProtoDUNE-SP, which took data in Q4 2018. ProtoDUNE-SP also breaks records as the largest beam test experiment yet constructed, and is a fundamental element of CERN's Neutrino Platform. This renders it an experiment in its own right, and the design and construction have been chosen to meet this scale. Due to the aggressive timescale, off-the-shelf electronics were used to meet the demands where possible. The cryostat comprises two primary subdetectors: a single phase...
FASER, the ForwArd Search ExpeRiment, is an experiment dedicated to searching for light, extremely weakly-interacting particles at CERN's Large Hadron Collider (LHC). Such particles may be produced in the very forward direction of the LHC's high-energy collisions and then decay to visible particles inside the FASER detector, which is placed 480 m downstream of the ATLAS interaction point, aligned with the beam axis. FASER also includes a sub-detector, FASER$\nu$, designed to detect neutrinos produced at the LHC and to study their properties. In this paper, each component...
The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of sub-systems involved set tight requirements for handling Conditions. In the last two years the collaboration has put substantial effort into the re-design of the system, with the aim of improving the scalability and operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using lessons learned during the operation of the Run I data-taking period (2009-2013). The new system makes use of the relational features of the database...
Large liquid argon Time Projection Chambers have been adopted for the DUNE experiment's far detector, which will be composed of four 17 kton detectors situated 1.5 km underground at the Sanford Underground Research Facility. This represents a large increase in scale compared to existing experiments. Both single- and dual-phase technologies are being validated at CERN, in cryostats capable of accommodating full-size detector modules and exposed to low-energy charged particle beams. The programme, called ProtoDUNE, also...
The CMS experiment at the Large Hadron Collider (LHC) at CERN, Geneva, Switzerland, is made of many detectors which in total sum up to more than 75 million channels. The detector monitoring information of all channels (temperatures, voltages, etc.), data quality, beam conditions, and other data crucial for the reconstruction and analysis of the experiment's recorded collision events are stored in an online database. A subset of that information, the "conditions data", is copied out to another database from where it is used for offline processing,...
Since 2014 the ATLAS and CMS experiments have shared a common vision on the database infrastructure for the handling of non-event data in the forthcoming LHC runs. The wide commonality of use cases has allowed the two experiments to agree on an overall design solution that meets the requirements of both. A first prototype was completed in 2016 and made available as a web service implementing a REST API with a set of functions for the management of conditions data. In this contribution, we describe the architecture and the tests that have been performed within the computing...
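As a rough illustration of how a client might interact with a conditions web service of this kind, the sketch below issues a single HTTP GET with libcurl and prints the response. The host name, endpoint path, tag name and query parameter are hypothetical, invented for the example; the actual prototype's API is not reproduced here.

```cpp
// Hedged sketch of a conditions REST client. Only the libcurl calls are real;
// the URL layout (/api/tags/{tag}/iovs?since=...) is an invented placeholder.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    // Ask the (hypothetical) service which payload is valid at run 316000.
    curl_easy_setopt(curl, CURLOPT_URL,
        "http://conditions.example.org/api/tags/PixelAlignment_v2/iovs?since=316000");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::cout << body << '\n';          // e.g. JSON describing the matching IOV
    else
        std::cerr << curl_easy_strerror(rc) << '\n';

    curl_easy_cleanup(curl);
    curl_global_cleanup();
}
```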
The Data AcQuisition (DAQ) software for most applications in high energy physics is composed of common building blocks, such as a networking layer, plug-in loading, configuration, and process management. These are often re-invented and developed from scratch for each project or experiment around its specific needs. In some cases, the time and resources available can be so limited that they make the development of these requirements difficult or impossible to meet. Moved by these premises, our team developed an open-source lightweight C++ framework...
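To make the "plug-in loading" building block mentioned above concrete, here is a generic sketch using POSIX dlopen/dlsym with a C factory symbol. The interface names, the exported symbol and the library file name are all illustrative assumptions and do not reflect the framework's actual API.

```cpp
// Generic plug-in loading sketch: open a shared library at runtime and ask a
// well-known factory symbol for an object implementing a module interface.
#include <dlfcn.h>
#include <iostream>
#include <memory>

struct Module {
    virtual ~Module() = default;
    virtual void configure() = 0;
    virtual void run() = 0;
};

// Each plug-in library is expected to export: extern "C" Module* create_module();
using Factory = Module* (*)();

std::unique_ptr<Module> load_module(const char* path) {
    void* handle = dlopen(path, RTLD_NOW);
    if (!handle) { std::cerr << dlerror() << '\n'; return nullptr; }
    auto factory = reinterpret_cast<Factory>(dlsym(handle, "create_module"));
    if (!factory) { std::cerr << dlerror() << '\n'; return nullptr; }
    return std::unique_ptr<Module>(factory());
}

int main() {
    if (auto m = load_module("./libexample_readout.so")) {  // hypothetical plug-in
        m->configure();
        m->run();
    }
}
```

In a full framework the handle would be tracked so the library can be unloaded, and the factory would typically receive a configuration object rather than being parameterless.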
The conditions data infrastructure for both ATLAS and CMS has to deal with the management of several terabytes of data [1, 2]. Distributed computing access to this data requires particular care and attention to manage request rates of up to tens of kHz. Thanks to the large overlap in use cases and requirements, the two experiments have worked towards a common solution for conditions data, with the aim of using it for data-taking in Run 3. In the meantime, other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity,...
With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; therefore the need for consistent and highly available access to it makes a great cause to revisit different aspects of the current data storage solutions.
Emerging high-performance storage technologies are opening up the possibility of designing new distributed data acquisition (DAQ) system architectures, in which the live acquisition of data and their processing are decoupled through a storage element. An example of these technologies is 3D XPoint, which promises to fill the gap between memory and traditional storage and offers unprecedented high throughput for nonvolatile data. In this article, we characterize the performance of persistent memory devices that use 3D XPoint technology, in the context of the DAQ system of one large Particle Physics experiment,...
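A very simplified sketch of the kind of measurement involved in such a characterization is shown below: sequential blocks are written to a file on the device under test and the sustained throughput is reported. The block size, total volume and mount point are arbitrary choices for illustration and are not the benchmark parameters used in the article.

```cpp
// Toy sequential-write throughput measurement for a storage device under test.
#include <chrono>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t block  = 1 << 20;   // 1 MiB per write
    constexpr std::size_t blocks = 4096;      // 4 GiB total
    std::vector<char> buffer(block, 0x5a);

    std::ofstream out("/mnt/pmem0/bench.dat", std::ios::binary);  // hypothetical mount point
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < blocks; ++i)
        out.write(buffer.data(), buffer.size());
    out.flush();
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double gib  = double(block) * blocks / (1 << 30);
    std::cout << gib / secs << " GiB/s sustained write\n";
}
```

A realistic characterization would additionally bypass the page cache (e.g. O_DIRECT), sweep block sizes and queue depths, and exercise multiple writer threads; the sketch only conveys the basic timing idea.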
The liquid argon Time Projection Chamber technique has matured and is now in use by several short-baseline neutrino experiments. This technology will be used in the long-baseline DUNE experiment; however, this experiment represents a large increase in scale, for which the technology needs to be validated explicitly. To this end, both single-phase and dual-phase implementations of the technology are being tested at CERN in two full-scale (10 × 10 × 10 m³) ProtoDUNE setups. Besides the detector technology, these setups also allow extensive tests...
The DAQ system of ProtoDUNE-SP successfully proved its design principles and met the requirements of the beam run in 2018. The technical design for the DUNE experiment has major differences compared to the prototype, due to the different requirements placed on the detector, as well as a radically different location of operation. The single-phase prototype at CERN is an integration facility for R&D aspects of the DAQ system. It allows the exploration of additional data processing capabilities and the optimization of the FELIX system, which is the chosen TPC readout solution for the DUNE detectors. One fundamental difference from the prototype is that it relies...
The DUNE detector is a neutrino physics experiment that is expected to take data starting from 2028. The data acquisition (DAQ) system of the experiment is designed to sustain several TB/s of incoming data, which will be temporarily buffered while being processed by a software-based selection system. In DUNE, some rare processes (e.g. Supernova Burst events) require storing the full complement of data produced over a 1-2 minute window. Such events are recognised and fire a specific trigger decision. Upon reception of this decision, data are moved from the temporary buffers...
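The buffering pattern described above can be illustrated with a toy sketch: incoming data chunks are kept in a bounded in-memory buffer (oldest data dropped when full), and on reception of a trigger decision the requested time window is flushed to persistent storage. Sizes, names and the single-threaded structure are purely illustrative and not the DUNE DAQ implementation.

```cpp
// Toy latency buffer with trigger-driven dump to persistent storage.
#include <cstdint>
#include <deque>
#include <fstream>
#include <string>
#include <vector>

struct Chunk {
    uint64_t timestamp;
    std::vector<char> payload;
};

class LatencyBuffer {
public:
    explicit LatencyBuffer(std::size_t maxChunks) : maxChunks_(maxChunks) {}

    void push(Chunk c) {
        buffer_.push_back(std::move(c));
        if (buffer_.size() > maxChunks_) buffer_.pop_front();  // drop oldest data
    }

    // Trigger decision received: persist every buffered chunk inside [begin, end].
    void dump(uint64_t begin, uint64_t end, const std::string& path) const {
        std::ofstream out(path, std::ios::binary);
        for (const auto& c : buffer_)
            if (c.timestamp >= begin && c.timestamp <= end)
                out.write(c.payload.data(), c.payload.size());
    }

private:
    std::size_t maxChunks_;
    std::deque<Chunk> buffer_;
};
```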
References: [1] A. Pfeiffer et al., "CMS experience with online and offline databases", J. Phys. Conf. Ser. 396 (2012) 052059. Introduction: CMS (Compact Muon Solenoid) is an experiment at the LHC at CERN whose goal is to answer the most fundamental questions about the universe, with the essential help of several fields of computer science. The Physics, Software & Computing Group, Operations section (CMG-CO) plays a main role in the development and maintenance of the complex infrastructure for managing the detector's Alignment and Calibration constants...
The GAIA [1] methodology deals with the macro- and micro-level analysis and design of multi-agent systems, and focuses on the computational organisation between interacting roles. I will illustrate a case study of a unique system that is based on this methodology. NEXT-TELL [2] is an Integrated Project whose main objective is to provide methodological support to teachers and students, in order to bring visions of the future into today's classrooms. The different stages and theoretical considerations of the project can be transparently modeled by...
With the restart of the LHC in 2015, the growth of the CMS conditions dataset will continue; therefore the need for consistent and highly available access to it makes a great cause to revisit different aspects of the current data storage solutions. We present a study of alternative backends for Conditions Databases, evaluating some of the most popular NoSQL ones that support a key-value representation of conditions. In addition to a baseline performance comparison between the document store, column-oriented, and plain key-value layers of these databases, software was...
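As a rough sketch of what a key-value representation of conditions can look like, the snippet below stores payloads content-addressed by hash and keeps, per tag, an interval-of-validity map from "since" values to payload hashes. The layout and names are illustrative assumptions, not the schema evaluated in the study.

```cpp
// Minimal key-value conditions store: tag -> (since -> payload hash), plus a
// deduplicated hash -> blob payload table.
#include <cstdint>
#include <map>
#include <string>
#include <unordered_map>

struct ConditionsStore {
    std::unordered_map<std::string, std::string> payloads;                   // hash -> blob
    std::unordered_map<std::string, std::map<uint64_t, std::string>> tags;   // tag -> IOVs

    void insert(const std::string& tag, uint64_t since,
                const std::string& hash, const std::string& blob) {
        payloads.emplace(hash, blob);   // no-op if this payload was already stored
        tags[tag][since] = hash;
    }

    // Payload valid at 'when': the IOV with the largest since <= when.
    const std::string* get(const std::string& tag, uint64_t when) const {
        auto t = tags.find(tag);
        if (t == tags.end()) return nullptr;
        auto it = t->second.upper_bound(when);
        if (it == t->second.begin()) return nullptr;
        auto p = payloads.find(std::prev(it)->second);
        return p == payloads.end() ? nullptr : &p->second;
    }
};
```

In a NoSQL backend the two maps would simply become two keyspaces (or column families), which is what makes the key-value representation attractive for this data model.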
The performance of I/O intensive applications is largely determined by the organization of the data and the associated insertion/extraction techniques. In this paper we present the design and implementation of an application that is targeted at managing data received (up to ~150 Gb/s payload throughput) into host DRAM, buffering them for several seconds, matched with the DRAM size, before they are dropped. All data are validated, processed and indexed. The features extracted from the processing are streamed out to subscribers over the network; in...
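The "buffer plus index" idea can be sketched as follows: records land in a fixed pool of slots so memory use is bounded, while an ordered index from timestamp to slot allows a time window to be looked up before the data are overwritten. The sizes and structure are illustrative only; the real application is multi-threaded and far more elaborate.

```cpp
// Fixed-size record pool with a timestamp index for window lookups.
#include <array>
#include <cstdint>
#include <map>
#include <vector>

constexpr std::size_t kSlots = 1024;   // buffer depth; sized against DRAM in reality

struct IndexedBuffer {
    std::array<std::vector<char>, kSlots> slots;
    std::array<uint64_t, kSlots> stamp{};     // timestamp currently held in each slot
    std::map<uint64_t, std::size_t> index;    // timestamp -> slot
    std::size_t next = 0;

    void insert(uint64_t ts, std::vector<char> record) {
        index.erase(stamp[next]);             // overwritten data leaves the index
        slots[next] = std::move(record);
        stamp[next] = ts;
        index[ts]  = next;
        next = (next + 1) % kSlots;
    }

    // Slots whose timestamps fall in [begin, end], oldest first.
    std::vector<std::size_t> window(uint64_t begin, uint64_t end) const {
        std::vector<std::size_t> hits;
        for (auto it = index.lower_bound(begin); it != index.end() && it->first <= end; ++it)
            hits.push_back(it->second);
        return hits;
    }
};
```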
The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This is composed of several services, which take on the various data management tasks required for the consumption of non-event data (also called condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and reliability of the services executing them. In this scope, a comprehensive monitoring and alarm generating system has been developed. It is implemented based on Nagios, the open source industry...
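For context, a check in such a Nagios-based setup is just an executable that prints one status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). The sketch below follows that convention; the monitored condition (the age of a heartbeat file) and the file path are hypothetical examples, not the actual CMS checks.

```cpp
// Minimal custom check following the Nagios plugin exit-code convention.
#include <chrono>
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    const fs::path heartbeat = "/var/run/conditions_uploader.heartbeat";  // hypothetical

    std::error_code ec;
    auto mtime = fs::last_write_time(heartbeat, ec);
    if (ec) { std::cout << "UNKNOWN - cannot stat heartbeat file\n"; return 3; }

    auto age = std::chrono::duration_cast<std::chrono::seconds>(
                   fs::file_time_type::clock::now() - mtime).count();
    if (age > 600) { std::cout << "CRITICAL - last upload " << age << "s ago\n"; return 2; }
    if (age > 120) { std::cout << "WARNING - last upload "  << age << "s ago\n"; return 1; }
    std::cout << "OK - last upload " << age << "s ago\n";
    return 0;
}
```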
Emerging high-performance storage technologies are opening up the possibility of designing new distributed data acquisition system architectures, in which the live acquisition of data and their processing are decoupled through a storage element. An example of these technologies is 3DXPoint, which promises to fill the gap between memory and traditional storage and offers unprecedented high throughput for persistency. In this paper, we characterize the performance of persistent memory devices that use 3DXPoint technology, in the context of one large Particle Physics experiment, DUNE. This...