- Distributed and Parallel Computing Systems
- Particle physics theoretical and experimental studies
- Particle Detector Development and Performance
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Parallel Computing and Optimization Techniques
- Computational Physics and Python Applications
- Particle Accelerators and Free-Electron Lasers
- Distributed systems and fault tolerance
- Neutrino Physics Research
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Atomic and Subatomic Physics Research
- Peer-to-Peer Network Technologies
- Integrated Energy Systems Optimization
- Superconducting Materials and Applications
- Astrophysics and Cosmic Phenomena
- Cloud Computing and Resource Management
- Cloud Computing and Remote Desktop Technologies
- Data Mining Algorithms and Applications
- Opportunistic and Delay-Tolerant Networks
- Lexicography and Language Studies
- Caching and Content Delivery
- Astro and Planetary Science
- Botanical Research and Chemistry
European Organization for Nuclear Research
2012-2024
Universidade Nova de Lisboa
2004
Machine learning is an important applied research area in particle physics, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics, and a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective is to connect...
CernVM is a Virtual Software Appliance capable of running physics applications from the LHC experiments at CERN. It aims to provide a complete and portable environment for developing and running data analysis on any end-user computer (laptop, desktop) as well as on the Grid, independently of Operating System platforms (Linux, Windows, MacOS). The experiment application software and its specific dependencies are built independently and delivered to the appliance just in time by means of a CernVM File System (CVMFS) specifically designed for efficient distribution....
The detector description is an essential component that is used to analyze data resulting from particle collisions in high energy physics experiments. We will present a generic detector description toolkit and describe the guiding requirements and the architectural design for such a toolkit, as well as the main implementation choices. The design is strongly driven by ease of use; developers of detector descriptions and of applications using them should provide only minimal information and minimal specific code to achieve the desired result. The toolkit will be built reusing already existing components...
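To make the "minimal information, minimal specific code" idea concrete, here is a small hedged sketch (not the toolkit's actual API; all type and function names are invented for illustration) of a detector description reduced to plain, hierarchical data that any client such as simulation, reconstruction or visualisation could traverse:

```cpp
// Hypothetical sketch of a data-driven detector description.
#include <string>
#include <vector>
#include <iostream>

struct Volume {
    std::string name;       // logical volume name, e.g. "BarrelLayer1"
    std::string material;   // material key resolved by the toolkit
    double rmin_mm, rmax_mm, half_z_mm;  // simple tube parameters
    std::vector<Volume> daughters;       // hierarchical placement
};

// The developer of a detector description supplies only the detector-specific data.
Volume buildToyTracker() {
    Volume barrel{"ToyTrackerBarrel", "Air", 30., 120., 500., {}};
    barrel.daughters.push_back({"SiliconLayer1", "Silicon", 30., 31., 500., {}});
    barrel.daughters.push_back({"SiliconLayer2", "Silicon", 60., 61., 500., {}});
    return barrel;
}

// Any client application can traverse the same description without detector-specific code.
void print(const Volume& v, int indent = 0) {
    std::cout << std::string(indent, ' ') << v.name << " (" << v.material << ")\n";
    for (const auto& d : v.daughters) print(d, indent + 2);
}

int main() {
    print(buildToyTracker());
    return 0;
}
```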
Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used; while for physicists, who focus on the application when developing code, better research productivity pleads for a high-level programming language. A popular approach consists of combining Python, used for the interface, and C++, for the computation-intensive part of the code. A more convenient and efficient approach would...
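The "popular approach" mentioned above can be illustrated with a minimal binding sketch using pybind11 (one of several available binding tools; the module and function names below are purely illustrative and not taken from the paper):

```cpp
// Python as the steering layer, C++ for the hot loop.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <cmath>
#include <vector>

// Computation-intensive kernel kept in compiled C++.
double sum_pt(const std::vector<double>& px, const std::vector<double>& py) {
    double total = 0.0;
    for (std::size_t i = 0; i < px.size(); ++i)
        total += std::sqrt(px[i] * px[i] + py[i] * py[i]);
    return total;
}

PYBIND11_MODULE(fastkernels, m) {
    m.def("sum_pt", &sum_pt, "Sum of transverse momenta, computed in C++");
}

// From Python the analysis steering stays high level:
//   import fastkernels
//   total = fastkernels.sum_pt([1.0, 2.0], [0.5, 1.5])
```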
Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science....
After ten years from its first version, the Gaudi software framework has undergone many changes and improvements, with a consequent increase of the code base. Those changes were almost always introduced preserving backward compatibility and reducing as much as possible changes to the framework itself; obsolete code has been removed only rarely. After the release targeted to the 2008 data taking, it was decided to review the framework with the aim of a general consolidation in view of 2009. We also take the occasion to introduce those changes never implemented before because of the big impact they have on the rest of the code, needed...
Using virtualization technology, the entire application environment of an LHC experiment, including its Linux operating system and the experiment's code, libraries and support utilities, can be incorporated into a virtual image and executed under suitable hypervisors installed on a choice of target host platforms.
CernVM [1] is a Virtual Software Appliance capable of running physics applications from the LHC experiments at CERN. Such a virtual appliance aims to provide a complete and portable environment for developing and running data analysis on any end-user computer (laptop, desktop) as well as on the Grid, independently of the Operating System software and hardware platform (Linux, Windows, MacOS). The goal is to remove the need for installation of the experiment software and to minimize the number of platforms (compiler-OS combinations) on which it needs to be supported and tested, thus reducing...
Computing for the LHC, and for HEP more generally, is traditionally viewed as requiring specialized infrastructure and software environments, and is therefore not compatible with the recent trend of "volunteer computing", where volunteers supply free processing time on ordinary PCs and laptops via standard Internet connections. In this paper, we demonstrate that with the use of virtual machine technology, at least some LHC computing tasks can be tackled with volunteer resources. Specifically, by presenting resources to...
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from the LHC and the linear collider community shows that existing solutions partly suffer from being overly complex, with deep object-hierarchies, or from unfavorable performance. The PODIO project was created in order to address these problems. It is based on the idea of employing plain-old-data (POD)...
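The contrast between deep object hierarchies and a plain-old-data layout can be sketched as follows. This is only a conceptual illustration with invented type names; the real library generates its EDM classes from data-model descriptions rather than being hand-written like this:

```cpp
// POD storage plus a thin handle layer, sketched by hand for illustration.
#include <vector>
#include <cstdio>

// Persistent event data as simple structs: no virtual functions, no pointers,
// so they can be stored contiguously and written out in bulk.
struct HitData {
    float x, y, z;   // position in mm
    float energy;    // deposited energy in GeV
};

// A non-owning handle offers a convenient object interface on top of the flat storage.
class Hit {
public:
    Hit(std::vector<HitData>& store, std::size_t index) : store_(&store), index_(index) {}
    float energy() const { return (*store_)[index_].energy; }
private:
    std::vector<HitData>* store_;
    std::size_t index_;
};

int main() {
    std::vector<HitData> hits = {{1.f, 2.f, 3.f, 0.5f}, {4.f, 5.f, 6.f, 1.2f}};
    Hit h(hits, 1);
    std::printf("energy = %.2f GeV\n", h.energy());
    return 0;
}
```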
In the past, increasing demands for HEP processing resources could be fulfilled by ever higher clock-frequencies and by distributing the work to more physical machines. Limitations in the power consumption of both CPUs and entire data centres are bringing an end to this era of easy scalability. To get the most CPU performance per watt, future hardware will be characterised by less memory per processor, as well as by thinner, more specialized and more numerous cores per die, and by rather heterogeneous resources. To fully exploit the potential of these many cores, frameworks need...
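As a hedged sketch (not any framework's actual code) of the kind of intra-process parallelism the abstract argues future frameworks need, the snippet below spreads independent event processing over all available cores with standard C++ threads instead of spawning more processes:

```cpp
// Strided parallel-for over events using std::thread.
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>
#include <cstdio>

double processEvent(int eventId) {
    return eventId * 0.5;   // placeholder for an expensive per-event computation
}

int main() {
    const int nEvents = 1000;
    const unsigned nWorkers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> results(nEvents);
    std::vector<std::thread> workers;

    for (unsigned w = 0; w < nWorkers; ++w) {
        workers.emplace_back([&, w] {
            // each worker handles a strided share of the events
            for (int e = static_cast<int>(w); e < nEvents; e += static_cast<int>(nWorkers))
                results[e] = processEvent(e);
        });
    }
    for (auto& t : workers) t.join();

    std::printf("sum = %f\n", std::accumulate(results.begin(), results.end(), 0.0));
    return 0;
}
```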
Future HEP experiments require detailed simulation and advanced reconstruction algorithms to explore the physics reach of their proposed machines and to design, optimise, and study detector geometry and performance. To synergize the development of the CLIC and FCC software efforts, the CERN EP R&D roadmap proposes the creation of a “Turnkey Software Stack”, which is foreseen to provide all necessary ingredients, from simulation to analysis, for future experiments; not only FCC, but also the Super-tau-charm factories, CEPC, and ILC. The stack will...
Software engineering is undergoing a paradigm shift in order to accommodate new CPU architectures with many cores, in which concurrency will play a more fundamental role in programming languages and libraries. The development of concurrent programming models and specialized software frameworks is needed to assist LHC scientists in developing their algorithms and applications in a way that allows for maximally parallel execution. In this paper we present our current ideas for evolving the framework used by the experiments to support the decomposition of the data processing of each event...
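As a conceptual sketch of decomposing the processing of a single event into concurrent tasks (all algorithm names below are invented stand-ins, not the framework's real components), independent algorithms with no mutual data dependencies can be launched as futures and joined by a downstream consumer:

```cpp
// Intra-event task decomposition with std::async.
#include <future>
#include <cstdio>

struct Event { int id; };

double runTracking(const Event& e)    { return e.id * 1.0; }   // stand-in algorithm
double runCalorimetry(const Event& e) { return e.id * 2.0; }   // stand-in algorithm

int main() {
    Event evt{42};

    // Tracking and calorimetry do not depend on each other's output,
    // so the scheduler may execute them in parallel.
    auto trk  = std::async(std::launch::async, runTracking, std::cref(evt));
    auto calo = std::async(std::launch::async, runCalorimetry, std::cref(evt));

    // A downstream algorithm consumes both results once they are ready.
    double combined = trk.get() + calo.get();
    std::printf("combined result for event %d: %f\n", evt.id, combined);
    return 0;
}
```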
Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010's, projections were that simulation demands would scale linearly with the luminosity increase, compensated only partially by an increase in computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution, due to intrinsic precision limitations. The remainder corresponds to speeding-up the simulation by several factors,...
The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and powerful configuration management functionality. Still, it has some limitations in terms of performance and of the complexity added to cope with new use cases that appeared recently. Therefore, we have been looking for a viable alternative and we have investigated the possibility of adopting the CMake tool, which does a very good job at building and is getting popular in the HEP community. The result of this study is a CMake-based...
HEP experiments produce enormous data sets at an ever-growing rate. To cope with the challenge posed by these data sets, the experiments' software needs to embrace all capabilities modern CPUs offer. With a decreasing memory/core ratio, the one-process-per-core approach of recent years becomes less feasible. Instead, multi-threading with fine-grained parallelism needs to be exploited to benefit from memory sharing among threads.
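The memory-sharing argument can be illustrated with a minimal sketch (illustrative only): a large read-only structure, standing in for conditions or geometry data, is loaded once and read by all worker threads, instead of being duplicated in every process as in the one-process-per-core model.

```cpp
// Threads reading one shared, read-only data structure without copying it.
#include <thread>
#include <vector>
#include <cstdio>

int main() {
    // ~80 MB of shared, read-only "conditions" data held exactly once in memory
    const std::vector<double> conditions(10'000'000, 1.0);

    auto worker = [&conditions](int id) {
        double sum = 0.0;
        for (double c : conditions) sum += c;   // read-only access, no copy
        std::printf("worker %d processed shared data, sum = %.0f\n", id, sum);
    };

    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker, i);
    for (auto& t : threads) t.join();
    return 0;
}
```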
The necessity for thread-safe experiment software has recently become very evident, largely driven by the evolution of CPU architectures towards exploiting increasing levels of parallelism. For high-energy physics this represents a real paradigm shift, as concurrent programming was previously limited to special, well-defined domains like online control or framework internals. This shift, however, falls into the middle of the successful LHC programme, and many million lines of code have already been written without...
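A minimal illustration of the issue: code written with a single thread in mind, such as a plain shared counter, produces a data race once several threads run it; a std::atomic (or a mutex) restores correctness. This is a generic example, not taken from any experiment's code.

```cpp
// Thread-safe shared counter.
#include <atomic>
#include <thread>
#include <vector>
#include <cstdio>

std::atomic<long> processed{0};   // a plain 'long processed' here would be a data race

void processChunk(int nEvents) {
    for (int i = 0; i < nEvents; ++i)
        processed.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t) threads.emplace_back(processChunk, 10000);
    for (auto& t : threads) t.join();
    std::printf("events processed: %ld\n", processed.load());  // always 80000
    return 0;
}
```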
Python is a flexible, powerful, high-level language with excellent interactive and introspective capabilities and a very clean syntax. As such, it can be a very effective tool for driving a physics analysis. Because it was designed to be extensible in low-level C-like languages, its use as a scientific steering language has become quite widespread. To this end, existing and custom-written C or C++ libraries are bound to the Python environment as so-called extension modules. A number of tools for easing the process of creating such bindings exist, such as SWIG or Boost....
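As a concrete taste of what such an extension module looks like, below is the canonical minimal Boost.Python example (module and function names are illustrative). Once compiled into a shared library, the module is used from Python simply as `import hello_ext; hello_ext.greet()`.

```cpp
// Minimal Boost.Python extension module.
#include <boost/python.hpp>

char const* greet() {
    return "hello from a C++ extension module";
}

BOOST_PYTHON_MODULE(hello_ext) {
    boost::python::def("greet", greet);
}
```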
Common and community software packages, such as ROOT, Geant4 and event generators, have been a key part of the LHC's success so far, and their continued development and optimisation will be critical in the future. The challenges are driven by an ambitious physics programme, notably the LHC accelerator upgrade to high luminosity, HL-LHC, and the corresponding detector upgrades of ATLAS and CMS. In this document we address the issues for software that is used by multiple experiments (usually even more widely than ATLAS and CMS) and maintained by teams of developers who...
The Gaudi/Athena and Grid Alliance (GANGA) is a front-end for the configuration, submission, monitoring, bookkeeping, output collection, and reporting of computing jobs run on a local batch system or on the grid. In particular, GANGA handles jobs that use applications written for the Gaudi software framework shared by the ATLAS and LHCb experiments. It exploits the commonality of Gaudi-based jobs, while insulating against grid-, batch- and framework-specific technicalities, to maximize end-user productivity in defining, configuring,...