- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Advanced Memory and Neural Computing
- Computational Physics and Python Applications
- Machine Learning in Materials Science
- Ferroelectric and Negative Capacitance Devices
- Dark Matter and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Advanced Electron Microscopy Techniques and Applications
- Neutrino Physics Research
- Neural Dynamics and Brain Function
- Advanced Neural Network Applications
- Radiation Detection and Scintillator Technologies
- Stochastic Gradient Optimization Techniques
- Distributed and Parallel Computing Systems
- Photoreceptor and Optogenetics Research
- Black Holes and Theoretical Physics
- Parallel Computing and Optimization Techniques
Graphcore (United Kingdom)
2021-2024
University of Bristol
2018-2023
Abstract Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs but, in a twist of irony, is more expensive on modern graphics processing units (GPUs) than training non-spiking networks. The emergence of Graphcore's Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable,...
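To make the cost argument concrete, here is a minimal NumPy sketch (not the paper's implementation; the function name `lif_forward` and all parameter values are illustrative) of leaky integrate-and-fire dynamics, showing the sequential time-stepping that backpropagation must unroll and that maps poorly onto throughput-oriented GPU hardware:

```python
import numpy as np

def lif_forward(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons.

    inputs: array of shape (T, N) -- input current per time step.
    Returns spike trains of shape (T, N). The loop over T is
    inherently sequential: step t depends on the state left by
    step t-1, which is why training SNNs is awkward on SIMD hardware.
    """
    T, N = inputs.shape
    v = np.zeros(N)               # membrane potentials
    spikes = np.zeros((T, N))
    decay = np.exp(-dt / tau)     # leak factor per step
    for t in range(T):
        v = decay * v + inputs[t]       # leak + integrate
        fired = v >= v_th               # threshold crossing
        spikes[t] = fired
        v = np.where(fired, 0.0, v)     # reset after a spike
    return spikes

# Example: 100 time steps, 8 neurons driven by noisy input current
rng = np.random.default_rng(0)
out = lif_forward(0.12 * rng.random((100, 8)))
print(out.sum(axis=0))  # spike counts per neuron
```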
Abstract We examine scenarios in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) where pair-produced squarks and gluinos decay via two cascades, each ending in a stable neutralino as the Lightest Supersymmetric Particle (LSP) and a Standard Model (SM)-like Higgs boson, with mass spectra such that the missing transverse energy, $E_T^{\mathrm{miss}}$, is very low. Performing two-dimensional parameter scans focusing on the hadronic $$H \to b\overline{b}$$...
This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications. The IPU is a new type of processor optimised for machine learning. Comparisons are made for neural-network-based event simulation, multiple-scattering correction, and flavour tagging, implemented on IPUs, GPUs and CPUs, using a variety of neural network architectures and hyperparameters. Additionally, Kálmán filter track reconstruction is implemented on IPUs and GPUs. The results indicate that IPUs hold considerable...
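The abstract does not give the filter's implementation details; the sketch below is a generic linear Kálmán filter predict/update cycle in NumPy, shown only to illustrate the small dense matrix algebra that track reconstruction repeats per detector layer. The toy state and noise matrices are assumptions, not values from the paper:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and its covariance
    z    : new measurement (e.g. a hit on the next detector layer)
    F, Q : state transition model and process noise
    H, R : measurement model and measurement noise
    """
    # Predict: propagate state and covariance to the next layer
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D track: state = (position, slope), measuring position only
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # straight-line propagation
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                      # e.g. multiple scattering
R = np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.25, 0.38, 0.52]:         # hypothetical hits
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x)  # fitted position and slope
```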
Current AI training infrastructure is dominated by single instruction, multiple data (SIMD) and systolic array architectures, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), that excel at accelerating parallel workloads and dense vector-matrix multiplications. Potentially more efficient neural network models utilizing sparsity and recurrence cannot leverage the full power of SIMD processors and are thus at a severe disadvantage compared to today's prominent architectures like Transformers and CNNs,...
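A small NumPy sketch (an illustration under assumed sizes and sparsity, not from the paper) of why this mismatch arises: exploiting a sparse activation vector saves arithmetic but trades the regular dense product that SIMD and systolic hardware accelerate for gather-style access they handle poorly:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)
x[rng.random(1024) > 0.05] = 0.0   # ~95% of activations are zero

# Dense path: what SIMD/systolic hardware accelerates well --
# every multiply is performed, zeros included.
y_dense = W @ x

# Sparse path: touch only the columns with nonzero activations.
# Far fewer FLOPs, but irregular gather access that SIMD
# pipelines and systolic arrays exploit poorly.
nz = np.flatnonzero(x)
y_sparse = W[:, nz] @ x[nz]

assert np.allclose(y_dense, y_sparse)
print(f"{len(nz)} of {x.size} columns actually needed")
```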