A. Heiss

ORCID: 0000-0003-1661-9556
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Distributed and Parallel Computing Systems
  • Particle Detector Development and Performance
  • Advanced Data Storage Technologies
  • Scientific Computing and Data Management
  • Dark Matter and Cosmic Phenomena
  • Astrophysics and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Parallel Computing and Optimization Techniques
  • Research Data Management Practices
  • Cloud Computing and Resource Management
  • Embedded Systems Design Techniques
  • Black Holes and Theoretical Physics
  • Neutrino Physics Research
  • Big Data Technologies and Applications
  • Computational Physics and Python Applications
  • Health, Environment, Cognitive Aging
  • Problem and Project Based Learning
  • Biomedical and Engineering Education
  • Face Recognition and Perception
  • Planetary Science and Exploration
  • Radiation Detection and Scintillator Technologies
  • Particle Accelerators and Free-Electron Lasers

Karlsruhe Institute of Technology
1999-2020

State Innovation Exchange
2014

FZI Research Center for Information Technology
2004-2005

Instituto de Física de Cantabria
2000-2004

Universidad de Cantabria
2000-2004

Children's Defense Fund
1998-2004

Academia Sinica
2000-2002

Istituto Nazionale di Fisica Nucleare
2000-2002

University of Bologna
2000-2002

Fermi Research Alliance
2002

Data from high-energy physics (HEP) experiments are collected with significant financial and human effort and are mostly unique. An inter-experimental study group on HEP data preservation and long-term analysis was convened as a panel of the International Committee for Future Accelerators (ICFA). The group was formed by the large collider-based experiments and investigated the technical and organisational aspects of HEP data preservation. An intermediate report was released in November 2009 addressing the general issues of data preservation in HEP. This paper includes and extends the intermediate report. It...

10.48550/arxiv.1205.4667 preprint EN other-oa arXiv (Cornell University) 2012-01-01

10.1007/s41781-019-0030-7 article DE Computing and Software for Big Science 2019-11-19

Modern large-scale astroparticle setups measure high-energy particles, gamma rays, neutrinos, radio waves, and the recently discovered gravitational waves. Ongoing and future experiments are located worldwide. The data they acquire have different formats, storage concepts, and publication policies. Such differences are a crucial point in the era of Big Data and of multi-messenger analysis in astroparticle physics. We propose an open science web platform called ASTROPARTICLE.ONLINE which enables us to publish, store, search, select,...

10.3390/data3040056 article EN cc-by Data 2018-11-28
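The entry above describes a web platform for publishing, storing, searching, and selecting astroparticle data. Below is a minimal, purely illustrative sketch of what querying such a catalogue over HTTP could look like; the endpoint URL, query parameters, and response fields are placeholder assumptions, not the actual ASTROPARTICLE.ONLINE API.

```python
# Hypothetical sketch: querying a multi-messenger data catalogue over HTTP.
# The endpoint and parameter names are illustrative assumptions only.
import requests

CATALOGUE_URL = "https://example.org/api/datasets"  # placeholder endpoint

def search_datasets(messenger: str, start: str, end: str) -> list[dict]:
    """Return catalogue entries for one messenger type within a time window."""
    params = {"messenger": messenger, "from": start, "to": end}
    response = requests.get(CATALOGUE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for entry in search_datasets("gamma", "2018-01-01", "2018-12-31"):
        print(entry.get("name"), entry.get("format"), entry.get("size_bytes"))
```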

To satisfy the future computing demands of the Worldwide LHC Computing Grid (WLCG), opportunistic usage of third-party resources is a promising approach. While the means to make such resources compatible with WLCG requirements are largely satisfied by virtual machine and container technologies, strategies to acquire and disband many resources from different providers are still a focus of current research. Existing meta-schedulers that manage resources in the WLCG are hitting the limits of their design when tasked with heterogeneous and diverse resource providers. We provide as part...

10.1051/epjconf/202024507040 article EN cc-by EPJ Web of Conferences 2020-01-01
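The work above is about acquiring and disbanding opportunistic resources on demand. As a toy illustration of that idea, and not the meta-scheduler described in the paper, here is a minimal demand-driven provisioning rule: request worker nodes while jobs are pending, release them once they sit idle. All names and the jobs-per-node ratio are assumptions.

```python
# Toy demand-driven provisioning decision for opportunistic worker nodes.
from dataclasses import dataclass

@dataclass
class PoolState:
    pending_jobs: int   # jobs waiting for a slot
    idle_nodes: int     # provisioned but unused opportunistic nodes
    busy_nodes: int     # opportunistic nodes currently running jobs

def provisioning_decision(state: PoolState, jobs_per_node: int = 8) -> int:
    """Positive: number of nodes to request; negative: number to release."""
    if state.pending_jobs > 0:
        # Demand exceeds supply: request enough nodes to absorb the backlog.
        return -(-state.pending_jobs // jobs_per_node)  # ceiling division
    # No backlog: hand idle nodes back to the resource provider.
    return -state.idle_nodes

print(provisioning_decision(PoolState(pending_jobs=100, idle_nodes=0, busy_nodes=20)))  # 13
print(provisioning_decision(PoolState(pending_jobs=0, idle_nodes=5, busy_nodes=20)))    # -5
```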

Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, as margins of error are narrow and inadequate performance may have grave consequences. However, conventional tests frequently use abstract settings with little relevance to real working situations. They may thus be unable to capture task-specific decrements. The aim of this study was...

10.1080/00140130802094371 article EN Ergonomics 2008-06-23

Increased operational effectiveness and the dynamic integration of only temporarily available compute resources (opportunistic resources) become more important in the next decade, due to the scarcity of resources for future high energy physics experiments as well as the desired integration of cloud and high performance computing resources. This results in a heterogeneous environment, which gives rise to huge challenges for the operation teams of the experiments. At the Karlsruhe Institute of Technology (KIT) we design solutions to tackle these challenges. In order to ensure...

10.1051/epjconf/202024507038 article EN cc-by EPJ Web of Conferences 2020-01-01

Demand for computing resources in high energy physics (HEP) shows a highly dynamic behavior, while the resources provided by the Worldwide LHC Computing Grid (WLCG) remain static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources gives rise to new levels of complexity, e.g. resources need to be managed dynamically and HEP applications require a very specific software environment usually not available at such resources...

10.1051/epjconf/201921408009 article EN cc-by EPJ Web of Conferences 2019-01-01
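The entry above notes that HEP applications need a specific software environment that opportunistic resources usually lack. A common way to deliver such an environment is to wrap the payload in a container with the experiment software mounted from CVMFS; the sketch below shows that pattern under stated assumptions (image path, bind mounts, and payload command are placeholders, and the paper's actual setup may differ).

```python
# Minimal sketch: run a HEP payload inside a container so the software
# environment travels with the job. Image and payload are placeholders.
import subprocess

IMAGE = "/cvmfs/unpacked.cern.ch/some/experiment/image:latest"  # placeholder
PAYLOAD = ["python3", "analysis.py", "--input", "events.root"]  # placeholder

def run_in_container(image: str, payload: list[str]) -> int:
    """Execute the payload inside an Apptainer container with CVMFS mounted."""
    cmd = ["apptainer", "exec", "--bind", "/cvmfs:/cvmfs", image, *payload]
    return subprocess.call(cmd)

if __name__ == "__main__":
    raise SystemExit(run_in_container(IMAGE, PAYLOAD))
```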

10.1016/j.nima.2004.07.075 article EN Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 2004-08-05

As a result of the excellent LHC performance in 2016, more data than expected has been recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advances will not be sufficient to meet the future requirements. This growing gap between supplied and demanded computing resources...

10.1088/1742-6596/1085/3/032056 article EN Journal of Physics Conference Series 2018-09-01
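To make the "growing gap" in the entry above concrete, here is a back-of-the-envelope projection of supply versus demand under a flat budget. The 20%/year technology gain and 40%/year demand growth are illustrative assumptions, not figures from the paper.

```python
# Toy projection of the gap between flat-budget supply and growing demand.
def project_gap(years: int, tech_gain: float = 0.20, demand_growth: float = 0.40) -> None:
    supply, demand = 1.0, 1.0  # normalised to year 0
    for year in range(1, years + 1):
        supply *= 1.0 + tech_gain      # same budget buys ~20% more capacity per year
        demand *= 1.0 + demand_growth  # recorded data and analyses grow faster
        print(f"year {year}: supply {supply:5.2f}, demand {demand:5.2f}, "
              f"shortfall {demand / supply:4.2f}x")

project_gap(5)
```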

To overcome the computing challenges in High Energy Physics, the available resources must be utilized as efficiently as possible. This targets algorithmic challenges in the workflows themselves, but also the scheduling of jobs to compute resources. To enable the best possible scheduling, job schedulers require accurate information about the resource consumption of a job before it is even executed. It is the responsibility of the user to provide an estimate of the resources required for their jobs. However, this is quite challenging for users, as they (i) want to ensure their jobs run correctly, (ii)...

10.1051/epjconf/202024507039 article EN cc-by EPJ Web of Conferences 2020-01-01
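The entry above is about obtaining accurate resource estimates instead of relying on user-supplied guesses. A minimal sketch of one such approach, assuming made-up historical job records and an arbitrary percentile-plus-margin rule (not necessarily the method used in the paper), could look like this:

```python
# Toy sketch: derive per-workflow memory requests from historical job records.
from collections import defaultdict
from statistics import quantiles

# (workflow name, peak memory in MB) for finished jobs -- made-up numbers
history = [("reco", 1800), ("reco", 2100), ("reco", 1950), ("reco", 2600),
           ("skim", 900), ("skim", 1100), ("skim", 950), ("skim", 1000)]

def suggest_memory_requests(records, margin: float = 1.1) -> dict[str, int]:
    per_workflow = defaultdict(list)
    for workflow, peak_mb in records:
        per_workflow[workflow].append(peak_mb)
    suggestions = {}
    for workflow, peaks in per_workflow.items():
        p95 = quantiles(peaks, n=20, method="inclusive")[-1]  # 95th percentile
        suggestions[workflow] = int(p95 * margin)              # plus safety margin
    return suggestions

print(suggest_memory_requests(history))
```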

The GridKa Tier 1 data and computing center hosts a significant share of WLCG processing resources. Providing these resources to all major LHC and other VOs requires efficient, scalable and reliable cluster management. To satisfy this, GridKa has recently migrated its batch resources from CREAM-CE and PBS to ARC-CE and HTCondor. This contribution discusses the key highlights of adopting this middleware at the scale of a European Tier 1 center: As the largest center using the ARC-CE plus HTCondor stack, GridKa is exemplary for migrating more than 20 000 cores over a time span...

10.1051/epjconf/201921403053 article EN cc-by EPJ Web of Conferences 2019-01-01
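As a small companion to the HTCondor migration described above, the sketch below summarises the job mix of a pool by parsing `condor_q -json`. It assumes the HTCondor command-line tools are installed on the machine running it; the status codes follow the standard job ClassAds (1 = idle, 2 = running).

```python
# Summarise an HTCondor queue by job status via condor_q's JSON output.
import json
import subprocess
from collections import Counter

def job_status_summary() -> Counter:
    out = subprocess.run(["condor_q", "-json"], capture_output=True, text=True, check=True)
    jobs = json.loads(out.stdout) if out.stdout.strip() else []
    return Counter(job.get("JobStatus") for job in jobs)

if __name__ == "__main__":
    print(job_status_summary())  # e.g. Counter({2: 1500, 1: 300})
```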

Modern experimental astroparticle physics features large-scale setups measuring different messengers, namely high-energy particles generated by cosmic accelerators (e.g. supernova remnants, active galactic nuclei, etc.): cosmic and gamma rays, neutrinos, and the recently discovered gravitational waves. Ongoing and future experiments are distributed over the Earth, including ground-based and underground/underwater installations as well as balloon payloads and spacecraft. The data acquired by these experiments have different formats, storage concepts and publication...

10.20944/preprints201810.0273.v1 preprint EN 2018-10-12

Current and future end-user analysis workflows in High Energy Physics demand the processing of growing amounts of data. This plays a major role when looking at the demands in the context of the High-Luminosity LHC. In order to keep the turn-around cycles of an analysis as short as possible, analysis clusters optimized with respect to these demands can be used. Since hyper-converged servers offer a good combination of compute power and local storage, they form an ideal basis for such clusters. In this contribution we report on the setup and commissioning of a dedicated...

10.1051/epjconf/202024507007 article EN cc-by EPJ Web of Conferences 2020-01-01
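The entry above rests on combining compute power with node-local storage to shorten analysis turn-around. A minimal sketch of that idea, with placeholder paths and a dummy analysis step (not the commissioned setup itself), is to stage inputs into a local cache once and process them in parallel against the local copies:

```python
# Illustrative sketch: stage inputs to node-local storage, then analyse locally.
import shutil
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

LOCAL_CACHE = Path("/tmp/analysis-cache")  # stands in for fast local storage

def stage_locally(remote_path: str) -> Path:
    """Copy a file into the node-local cache unless it is already there."""
    LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
    local = LOCAL_CACHE / Path(remote_path).name
    if not local.exists():
        shutil.copy(remote_path, local)
    return local

def analyse(remote_path: str) -> int:
    """Placeholder analysis: just report the size of the locally cached copy."""
    return stage_locally(remote_path).stat().st_size

if __name__ == "__main__":
    inputs = ["/data/run1.root", "/data/run2.root"]  # placeholder input files
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(analyse, inputs)))
```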

A data life cycle (DLC) is a high-level data processing pipeline that involves data acquisition, event reconstruction, data analysis, publication, archiving, and sharing. For astroparticle physics a DLC is particularly important due to the geographical and content diversity of the research field. A dedicated, experiment-spanning analysis centre would ensure that multi-messenger analyses can be carried out using state-of-the-art methods. The German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI) is a joint project...

10.22323/1.358.0284 article EN cc-by-nc-nd Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019) 2019-07-22
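Purely as a structural illustration of the pipeline named in the entry above, the sketch below chains the life-cycle stages as functions over a "data product" dictionary; the stage names follow the abstract, while their contents are placeholders rather than anything from GRADLCI.

```python
# Minimal sketch of a data life cycle as a chain of named stages.
from typing import Callable

Stage = Callable[[dict], dict]

def acquire(product: dict) -> dict:
    return {**product, "raw": "detector readout"}

def reconstruct(product: dict) -> dict:
    return {**product, "events": "reconstructed events"}

def analyse(product: dict) -> dict:
    return {**product, "result": "spectrum / flux measurement"}

def archive(product: dict) -> dict:
    return {**product, "archived": True}

def run_life_cycle(stages: list[Stage]) -> dict:
    product: dict = {}
    for stage in stages:
        product = stage(product)
    return product

print(run_life_cycle([acquire, reconstruct, analyse, archive]))
```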

Tape storage is still a cost-effective way to keep large amounts of data over a long period of time, and it is expected that this will continue in the future. The GridKa tape environment is a complex system of many hardware components and software layers. Configuring it for optimal performance for all use cases is a non-trivial task that requires a lot of experience. We present the current status of the GridKa tape environment, report on recent upgrades and improvements, and outline plans to further develop and enhance the system, especially with regard to the future requirements of HEP...

10.1051/epjconf/201921404029 article EN cc-by EPJ Web of Conferences 2019-01-01

The current experiments in high energy physics (HEP) have a huge data rate. To convert the measured data, an enormous number of computing resources is needed, and this demand will further increase with upgraded and newer experiments. To fulfill the ever-growing demand, the allocation of additional, potentially only temporarily available, non-HEP dedicated resources is important. These so-called opportunistic resources cannot only be used for analyses in general but are also well-suited to cover the typical unpredictable peak demands for computing resources. For...

10.1088/1742-6596/1525/1/012067 article EN Journal of Physics Conference Series 2020-04-01

Data growth over several years within HEP experiments requires a wider use of storage systems for WLCG Tiered Centers. It also increases the complexity of these systems, which includes the expansion of hardware components and thereby complicates existing software products even more. Coping with such systems is a non-trivial task that requires highly qualified specialists. Storing petabytes of data on tape is still the most cost-effective way. Year after year the amount of data increases and, consequently, a detailed study of its optimal usage and the verification of its performance is a key aspect...

10.1051/epjconf/202024504026 article EN cc-by EPJ Web of Conferences 2020-01-01
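In the spirit of the performance verification mentioned above, here is a tiny throughput check that reads a file sequentially in large blocks and reports MB/s. The file path and block size are placeholders, and a real tape test would measure staging from tape to the disk buffer rather than a plain file read.

```python
# Toy sequential read throughput measurement (placeholder path).
import time

def read_throughput(path: str, block_size: int = 64 * 1024 * 1024) -> float:
    """Return the sequential read rate in MB/s."""
    total, start = 0, time.monotonic()
    with open(path, "rb") as handle:
        while chunk := handle.read(block_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    print(f"{read_throughput('/path/to/staged/file'):.1f} MB/s")  # placeholder path
```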

GridKa School is an annual international computing school hosted by one of the largest scientific data centers in Europe. Adopting its computational resources and expert knowledge for educational purposes, the school provides excellent possibilities to gain hands-on skills in the deployment and application of advanced software tools and techniques. This paper describes the unique concept of the school, its model, the continuous development of its curriculum, and its successful organization structure. It argues the benefits and challenges of the chosen approach, which has...

10.1109/educon.2015.7096003 article EN 2015 IEEE Global Engineering Education Conference (EDUCON) 2015-03-01

The GridKa computing center, hosted by the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT) in Germany, serves as the largest Tier-1 center used by the ALICE collaboration at the LHC. In 2013 it provides 30k HEPSPEC06, 2.7 PB of disk space, and 5.25 PB of tape storage to ALICE. The 10 Gbit/s network connections from CERN, to several Tier-1 centers, and to the general purpose network are used intensively. In 2012 a total amount of about 1 PB was transferred to and from GridKa. As Grid framework, AliEn (ALICE Environment) is being used to access the resources, and various...

10.1088/1742-6596/513/6/062038 article EN Journal of Physics Conference Series 2014-06-11
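A quick arithmetic check relates the 10 Gbit/s links quoted above to petabyte-scale transfer volumes; the 80% usable-link fraction is an illustrative assumption.

```python
# How long does 1 PB take over a 10 Gbit/s link at 80% efficiency?
PETABYTE = 1e15          # bytes
LINK = 10e9 / 8          # 10 Gbit/s in bytes per second
EFFICIENCY = 0.8         # assumed usable fraction of the link

seconds = PETABYTE / (LINK * EFFICIENCY)
print(f"~{seconds / 86400:.1f} days to move 1 PB over one such link")  # ~11.6 days
```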

Several scientific fields, including Astrophysics, Astroparticle Physics, Cosmology, Nuclear and Particle Physics, and Research with Photons, are estimating that by the 2020 decade they will require data handling systems with data volumes approaching the Zettabyte scale, distributed amongst as many as 10^18 individually addressable data objects (Zettabyte-Exascale systems). It may be convenient or necessary to deploy such systems using multiple physical sites. This paper describes the findings of a working group composed of experts from several...

10.1088/1742-6596/664/4/042009 article EN Journal of Physics Conference Series 2015-12-23
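To give a feel for the Zettabyte-Exascale regime described above, the snippet below works out the average object size and the metadata footprint implied by 10^18 objects, and shows a toy deterministic mapping of objects onto sites; the 100 bytes/object catalogue entry and the hashing scheme are illustrative assumptions, not the working group's design.

```python
# Scale arithmetic for ~1 ZB spread over ~10^18 addressable objects,
# plus a toy deterministic object-to-site mapping.
import hashlib

ZETTABYTE = 1e21   # bytes
OBJECTS = 1e18     # individually addressable data objects

print(f"average object size: {ZETTABYTE / OBJECTS / 1e3:.0f} kB")
print(f"catalogue at 100 bytes/object: {OBJECTS * 100 / 1e18:.0f} EB of metadata")

def site_for(object_id: str, sites: int = 5) -> int:
    """Toy deterministic placement of an object onto one of several sites."""
    return int(hashlib.sha256(object_id.encode()).hexdigest(), 16) % sites

print(site_for("run123/event456"))
```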

10.1016/j.nima.2005.11.106 article EN Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 2005-12-10