Alexander Titterton

ORCID: 0000-0001-5711-3899
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Advanced Memory and Neural Computing
  • Computational Physics and Python Applications
  • Machine Learning in Materials Science
  • Ferroelectric and Negative Capacitance Devices
  • Dark Matter and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Advanced Electron Microscopy Techniques and Applications
  • Neutrino Physics Research
  • Neural dynamics and brain function
  • Advanced Neural Network Applications
  • Radiation Detection and Scintillator Technologies
  • Stochastic Gradient Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Photoreceptor and optogenetics research
  • Black Holes and Theoretical Physics
  • Parallel Computing and Optimization Techniques

Graphcore (United Kingdom)
2021-2024

University of Bristol
2018-2023

Abstract Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, on modern graphics processing units this becomes more expensive than for non-spiking networks. The emergence of Graphcore's Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable,...

10.1088/2634-4386/ad2373 article EN cc-by Neuromorphic Computing and Engineering 2024-01-29
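
For readers unfamiliar with the spiking dynamics this work accelerates, below is a minimal sketch of one time step of a leaky integrate-and-fire (LIF) layer in NumPy. It is illustrative only and not taken from the paper; the leak factor, threshold, and subtraction reset are common textbook choices. The explicit time loop carrying membrane state across steps is exactly the sequential structure that makes SNN training awkward on GPUs.

```python
import numpy as np

def lif_step(v, spikes_in, w, beta=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire layer.

    v         -- membrane potentials, shape (n_neurons,)
    spikes_in -- binary input spikes, shape (n_inputs,)
    w         -- weight matrix, shape (n_neurons, n_inputs)
    beta      -- membrane leak factor (illustrative value)
    v_th      -- firing threshold (illustrative value)
    """
    v = beta * v + w @ spikes_in             # leak, then integrate input current
    spikes_out = (v >= v_th).astype(float)   # fire where threshold is crossed
    v = v - spikes_out * v_th                # soft reset by subtraction
    return v, spikes_out

# toy usage: 4 inputs feeding 3 neurons over 5 sequential time steps
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(3, 4))
v = np.zeros(3)
for t in range(5):
    v, s = lif_step(v, rng.integers(0, 2, size=4).astype(float), w)
```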

Abstract We examine scenarios in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where pair-produced squarks and gluinos decay via two cascades, each ending in a stable neutralino as the Lightest Supersymmetric Particle (LSP) and a Standard Model (SM)-like Higgs boson, with mass spectra such that the missing transverse energy, $E_T^{\mathrm{miss}}$, is very low. Performing two-dimensional parameter scans and focusing on the hadronic $H \to b\overline{b}$...

10.1007/jhep10(2018)064 article EN cc-by Journal of High Energy Physics 2018-10-01
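
For reference, the missing transverse energy invoked above has the standard collider definition shown below; this is a textbook quantity, not a formula specific to the paper. A very low $E_T^{\mathrm{miss}}$ signature arises when the invisible particles' transverse momenta largely cancel against the visible system.

```latex
% E_T^miss: magnitude of the negative vector sum of the transverse
% momenta p_T of all visible reconstructed objects i in the event.
E_T^{\mathrm{miss}} = \left| -\sum_{i \in \mathrm{visible}} \vec{p}_{T,i} \right|
```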

This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications. The IPU is a new type of processor optimised for machine learning. Comparisons are made for neural-network-based event simulation, multiple-scattering correction, and flavour tagging, implemented on IPUs, GPUs and CPUs, using a variety of neural network architectures and hyperparameters. Additionally, a Kálmán filter for track reconstruction is implemented on IPUs and GPUs. The results indicate that IPUs hold considerable...

10.48550/arxiv.2008.09210 preprint EN cc-by-sa arXiv (Cornell University) 2020-01-01
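
The Kálmán filter mentioned above is a standard linear estimator whose predict/update cycle is the building block of track fitting. The NumPy sketch below is a hedged, generic illustration with textbook matrix names, not the paper's IPU implementation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kálmán filter.

    x, P -- state estimate and its covariance
    z    -- new measurement
    F, H -- state-transition and measurement matrices
    Q, R -- process and measurement noise covariances
    """
    # predict: propagate the state (e.g. to the next detector layer)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update: blend the prediction with the measurement via the Kálmán gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 1-D constant-velocity track: state = (position, velocity)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                  # only position is measured
Q, R = 1e-4 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```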

Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, on modern graphics processing units (GPUs) this becomes more expensive than for non-spiking networks. The emergence of Graphcore's Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential,...

10.48550/arxiv.2211.10725 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Current AI training infrastructure is dominated by single instruction multiple data (SIMD) and systolic array architectures, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), that excel at accelerating parallel workloads and dense vector-matrix multiplications. Potentially more efficient neural network models utilizing sparsity and recurrence cannot leverage the full power of a SIMD processor and are thus at a severe disadvantage compared to today's prominent architectures like Transformers and CNNs,...

10.1609/aaai.v38i11.29087 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
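
To make the sparsity point concrete, here is a small hedged illustration in NumPy/SciPy: the same mostly-zero weight matrix multiplied densely (every entry touched, SIMD-friendly) and in compressed sparse row form (only nonzeros touched, but via irregular memory access that SIMD hardware handles poorly). The sizes and 5% density are arbitrary choices, not figures from the paper.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# A ~95%-sparse weight matrix: most entries are zero, as in the
# sparse models discussed above (sizes and density are illustrative).
dense_w = rng.normal(size=(1024, 1024)) * (rng.random((1024, 1024)) < 0.05)
csr_w = sparse.csr_matrix(dense_w)      # store only the ~5% nonzeros
x = rng.normal(size=1024)

y_dense = dense_w @ x                   # dense matvec: touches every entry
y_sparse = csr_w @ x                    # sparse matvec: nonzeros only
assert np.allclose(y_dense, y_sparse)   # identical result, different work
```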

Abstract This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications. The IPU is a new type of processor optimised for machine learning. Comparisons are made for neural-network-based event simulation, multiple-scattering correction, and flavour tagging, implemented on IPUs, GPUs and CPUs, using a variety of neural network architectures and hyperparameters. Additionally, a Kálmán filter for track reconstruction is implemented on IPUs and GPUs. The results indicate that IPUs hold...

10.1007/s41781-021-00057-z article EN cc-by Computing and Software for Big Science 2021-03-17

Current AI training infrastructure is dominated by single instruction multiple data (SIMD) and systolic array architectures, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), that excel at accelerating parallel workloads and dense vector-matrix multiplications. Potentially more efficient neural network models utilizing sparsity and recurrence cannot leverage the full power of a SIMD processor and are thus at a severe disadvantage compared to today's prominent architectures like Transformers and CNNs,...

10.48550/arxiv.2311.04386 preprint EN cc-by arXiv (Cornell University) 2023-01-01