Jan E. Gerken

ORCID: 0000-0002-0172-7944
Research Areas
  • Black Holes and Theoretical Physics
  • Algebraic structures and combinatorial models
  • Particle physics theoretical and experimental studies
  • Advanced Algebra and Geometry
  • Neural Networks and Applications
  • Quantum Chromodynamics and Particle Interactions
  • Topological and Geometric Data Analysis
  • Generative Adversarial Networks and Image Synthesis
  • Model Reduction and Neural Networks
  • Medical Imaging and Analysis
  • Medical Image Segmentation Techniques
  • Image Enhancement Techniques
  • Optical Polarization and Ellipsometry
  • Advanced Combinatorial Mathematics
  • Satellite Image Processing and Photogrammetry
  • Video Surveillance and Tracking Methods
  • Advanced Mathematical Identities
  • Advanced Neural Network Applications
  • Electromagnetic Scattering and Analysis
  • Advanced Image and Video Retrieval Techniques
  • Image and Signal Denoising Methods
  • Advanced Numerical Analysis Techniques
  • Explainable Artificial Intelligence (XAI)
  • Opinion Dynamics and Social Influence
  • Computational Physics and Python Applications

Chalmers University of Technology
2021-2024

University of Gothenburg
2023-2024

Max Planck Institute for Gravitational Physics
2019-2021

Max Planck Society
2020

Abstract We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks. We develop gauge equivariant convolutional neural networks on arbitrary manifolds $\mathcal{M}$ using principal bundles with structure group $K$ and equivariant maps between sections of associated vector bundles. We also discuss group equivariant neural networks for homogeneous spaces $\mathcal{M}=G/K$...

10.1007/s10462-023-10502-7 article EN cc-by Artificial Intelligence Review 2023-06-04
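The group equivariant layers surveyed above can be illustrated with a minimal numpy sketch (not the paper's construction, just the standard lifting convolution for the rotation group C4): correlating an image with all four 90-degree rotations of a filter makes rotations of the input act by permuting and rotating the output channels, so pooling over the group and spatial axes gives an invariant feature.

```python
import numpy as np

def conv2d_valid(x, k):
    # naive 'valid' 2D cross-correlation
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def c4_lifting_conv(x, k):
    # Lifting convolution for C4: correlate with all four rotated filters.
    # Rotating the input permutes (and rotates) the four output channels.
    return np.stack([conv2d_valid(x, np.rot90(k, r)) for r in range(4)])

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

# A C4-invariant readout: pool over both the group and spatial axes.
inv = lambda img: c4_lifting_conv(img, k).sum()
print(np.allclose(inv(x), inv(np.rot90(x))))  # True
```

The same weight-sharing pattern, phrased in terms of induced representations, is what the survey generalizes to arbitrary manifolds and gauge groups.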

Abstract We investigate one-loop four-point scattering of non-abelian gauge bosons in heterotic string theory and identify new connections with the corresponding open-string amplitude. In the low-energy expansion of the heterotic-string amplitude, the integrals over torus punctures are systematically evaluated in terms of modular graph forms, certain non-holomorphic modular forms. For a specific torus integral, the modular graph forms are related to the elliptic multiple zeta values from analogous integrations over cylinder boundaries. The detailed...

10.1007/jhep01(2019)052 article EN cc-by Journal of High Energy Physics 2019-01-01
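For orientation, the simplest example of the modular graph forms appearing in this and the following entries (a standard fact, not specific to this paper) is the non-holomorphic Eisenstein series, the one-loop graph with $k$ edges:

```latex
E_k(\tau) \;=\; \sum_{(m,n)\neq(0,0)}
  \left(\frac{\operatorname{Im}\tau}{\pi\,|m\tau+n|^{2}}\right)^{\!k},
\qquad k \geq 2,
```

which is modular invariant and satisfies the Laplace equation $\Delta E_k = k(k-1)E_k$ on the upper half-plane; general modular graph forms attach one such lattice sum to each edge of a graph.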

Abstract We study generating series of torus integrals that contain all so-called modular graph forms relevant for massless one-loop closed-string amplitudes. By analysing the differential equation of the generating series, we construct a solution for their low-energy expansion to all orders in the inverse string tension α′. Our solution is expressed through initial data involving multiple zeta values and certain real-analytic functions of the modular parameter of the torus. These are built from real and imaginary parts of holomorphic iterated Eisenstein integrals and should be...

10.1007/jhep07(2020)190 article EN cc-by Journal of High Energy Physics 2020-07-01

Abstract Modular graph forms are a class of modular covariant functions which appear in the genus-one contribution to the low-energy expansion of closed string scattering amplitudes. Modular graph forms with holomorphic subgraphs enjoy the simplifying property that they may be reduced to sums of products of modular graph forms of strictly lower loop order. In the particular case of dihedral modular graph forms, a closed-form expression for this subgraph reduction was obtained previously by D'Hoker and Green. In the current work, we extend these results to trihedral modular graph forms. Doing so involves...

10.1007/jhep01(2019)131 article EN cc-by Journal of High Energy Physics 2019-01-01

Abstract We relate the low-energy expansions of world-sheet integrals in genus-one amplitudes of open- and closed-string states. The respective expansion coefficients are elliptic multiple zeta values (eMZVs) in the open-string case and non-holomorphic modular forms dubbed 'modular graph forms (MGFs)' for closed strings. By inspecting the differential equations and degeneration limits of suitable generating series of genus-one integrals, we identify formal substitution rules mapping the eMZVs of open strings to the MGFs of closed strings. Based on the properties of these...

10.1088/1751-8121/abe58b article EN cc-by Journal of Physics A Mathematical and Theoretical 2021-02-11

Counterfactuals can explain classification decisions of neural networks in a human-interpretable way. We propose a simple but effective method to generate such counterfactuals. More specifically, we perform a suitable diffeomorphic coordinate transformation and then perform gradient ascent in these coordinates to find counterfactuals which are classified with great confidence as a specified target class. We propose two methods that leverage generative models to construct such coordinate systems that are either exactly or approximately diffeomorphic....

10.1109/tpami.2023.3339980 article EN cc-by IEEE Transactions on Pattern Analysis and Machine Intelligence 2023-12-06
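The core loop described above can be sketched in a few lines. Everything here is an illustrative stand-in rather than the paper's models: $g(z)=Gz$ is a toy linear "generator" supplying the coordinate system, and $f$ is a logistic classifier; gradient ascent on the target-class probability is performed in the latent coordinates $z$, and the counterfactual is obtained by decoding.

```python
import numpy as np

# Toy latent-space counterfactual search (hypothetical models, not the
# paper's architectures): g(z) = G z is a linear "generator", f a
# logistic classifier scoring the target class.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])          # hypothetical decoder matrix
w = np.array([1.0, 1.0, 0.0])        # hypothetical classifier weights
b = -2.0                             # source point scores low on the target class

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
f = lambda z: sigmoid(w @ (G @ z) + b)   # target-class probability

def grad_z(z):
    # chain rule through the generator: df/dz = f(1-f) * G^T w
    p = f(z)
    return p * (1.0 - p) * (G.T @ w)

z = np.zeros(2)                      # latent coordinates of the source point
for _ in range(300):                 # gradient ascent in latent coordinates
    z += 0.5 * grad_z(z)

x_cf = G @ z                         # decoded counterfactual
print(f(z) > 0.99)                   # True: high-confidence target class
```

In the paper the coordinate change comes from a normalizing flow (exactly diffeomorphic) or a VAE/GAN (approximately diffeomorphic), which is what keeps the ascent on or near the data manifold.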

Modular graph forms (MGFs) are a class of non-holomorphic modular forms which naturally appear in the low-energy expansion of closed-string genus-one amplitudes and have generated considerable interest from pure mathematicians. MGFs satisfy numerous non-trivial algebraic and differential relations which have been studied extensively in the literature and lead to significant simplifications. In this paper, we systematically combine these relations to obtain basis decompositions of all two- and three-point MGFs of total modular weight $w+\bar{w}\leq12$, starting...

10.1088/1751-8121/abbdf2 article EN cc-by Journal of Physics A Mathematical and Theoretical 2020-10-02

We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks. We develop gauge equivariant convolutional neural networks on arbitrary manifolds $\mathcal{M}$ using principal bundles with structure group $K$ and equivariant maps between sections of associated vector bundles. We also discuss group equivariant neural networks for homogeneous spaces $\mathcal{M}=G/K$, which are instead equivariant with respect to the global symmetry $G$ on $\mathcal{M}$. Group equivariant layers can be interpreted as intertwiners between induced representations of $G$, and we show their...

10.48550/arxiv.2105.13926 preprint EN other-oa arXiv (Cornell University) 2021-01-01

In this dissertation, we investigate the low-energy expansion of scattering amplitudes of closed strings at one-loop level (i.e. genus one) in a ten-dimensional Minkowski background using a special class of functions, the so-called modular graph forms. These allow a systematic computation of the expansion and satisfy many non-trivial algebraic and differential relations. We study these relations in detail and derive basis decompositions for a large number...

10.18452/21829 preprint DE arXiv (Cornell University) 2020-09-04

We demonstrate that deep ensembles are secretly equivariant models. More precisely, we show that they become equivariant for all inputs and at all training times by simply using data augmentation. Crucially, equivariance holds off-manifold and for any architecture in the infinite width limit. The equivariance is emergent in the sense that predictions of individual ensemble members are not equivariant but their collective prediction is. Neural tangent kernel theory is used to derive this result, and we verify our theoretical insights with detailed numerical experiments.

10.48550/arxiv.2403.03103 preprint EN arXiv (Cornell University) 2024-03-05
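The mechanism behind this result can be illustrated with a symmetrization identity (a toy analogue, not the paper's infinite-width ensemble argument): averaging a model's predictions over the group orbit of the input is exactly invariant, even when the model itself is not, just as the ensemble mean becomes equivariant while individual members are not.

```python
import numpy as np

# Toy illustration with a random, non-equivariant network and the
# rotation group C4 acting on 4x4 inputs (hypothetical weights).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 16))   # hidden-layer weights
W2 = rng.standard_normal(16)         # readout weights

def net(x):
    # a plain fully-connected model with no built-in symmetry
    return W2 @ np.tanh(W1 @ x.ravel())

def symmetrized(x):
    # average over the full C4 orbit of the input
    return np.mean([net(np.rot90(x, r)) for r in range(4)])

x = rng.standard_normal((4, 4))
print(np.isclose(net(x), net(np.rot90(x))))                   # False
print(np.isclose(symmetrized(x), symmetrized(np.rot90(x))))   # True
```

The paper's NTK analysis shows that training a wide ensemble with data augmentation effectively performs this group average over the ensemble, so the collective prediction inherits the symmetry.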

10.1109/cvpr52733.2024.00580 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024-06-16

Equivariant neural networks have in recent years become an important technique for guiding architecture selection, with many applications in domains ranging from medical image analysis to quantum chemistry. In particular, as the most general linear equivariant layers with respect to the regular representation, group convolutions have been highly impactful in numerous applications. Although equivariant architectures have been studied extensively, much less is known about the training dynamics of equivariant neural networks. Concurrently, neural tangent kernels (NTKs)...

10.48550/arxiv.2406.06504 preprint EN arXiv (Cornell University) 2024-06-10

In this thesis, we investigate the low-energy expansion of scattering amplitudes of closed strings at one-loop level (i.e. genus one) in a ten-dimensional Minkowski background using a special class of functions called modular graph forms. These allow for a systematic evaluation of the expansion and satisfy many non-trivial algebraic and differential relations. We study these relations in detail, leading to basis decompositions for a large number of modular graph forms which greatly reduce the complexity of the expansions of the integrals appearing in the amplitude. One...

10.48550/arxiv.2011.08647 preprint EN other-oa arXiv (Cornell University) 2020-01-01

We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are evaluated on single or multiple items from the MNIST or FashionMNIST dataset projected onto the sphere. For the task of image classification,...

10.48550/arxiv.2202.03990 preprint EN other-oa arXiv (Cornell University) 2022-01-01

High-resolution wide-angle fisheye images are becoming more and more important for robotics applications such as autonomous driving. However, using ordinary convolutional neural networks or vision transformers on this data is problematic due to the projection and distortion losses introduced when projecting onto a rectangular grid in the plane. We introduce the HEAL-SWIN transformer, which combines the highly uniform Hierarchical Equal Area iso-Latitude Pixelation (HEALPix) grid used in astrophysics and cosmology with...

10.48550/arxiv.2307.07313 preprint EN other-oa arXiv (Cornell University) 2023-01-01
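The motivation for the HEALPix grid can be checked with a short calculation (a generic spherical-geometry fact, not code from the paper): on a plain equiangular latitude-longitude grid, a cell between colatitudes $\theta_1 < \theta_2$ spanning $\Delta\varphi$ in azimuth subtends solid angle $\Delta\varphi\,(\cos\theta_1-\cos\theta_2)$, which shrinks sharply toward the poles, whereas HEALPix cells are all equal-area by construction.

```python
import numpy as np

# Per-cell solid angle on an equiangular N x 2N lat-lon grid
# (illustrative resolution; any N shows the same effect).
N = 64
theta = np.linspace(0.0, np.pi, N + 1)   # ring boundaries in colatitude
dphi = 2 * np.pi / (2 * N)               # equiangular azimuth step
areas = dphi * (np.cos(theta[:-1]) - np.cos(theta[1:]))

# Equator cells are far larger than pole cells -> heavy oversampling
# at the poles, which HEALPix avoids.
print(areas.max() / areas.min() > 10)    # True
print(np.isclose(areas.sum() * 2 * N, 4 * np.pi))  # True: cells tile the sphere
```

This nonuniformity is what distorts convolutions and attention windows on naively projected spherical data, and equal-area pixelation removes it at the grid level.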

We relate the low-energy expansions of world-sheet integrals in genus-one amplitudes of open- and closed-string states. The respective expansion coefficients are elliptic multiple zeta values (eMZVs) in the open-string case and non-holomorphic modular forms dubbed "modular graph forms" (MGFs) for closed strings. By inspecting the differential equations and degeneration limits of suitable generating series of genus-one integrals, we identify formal substitution rules mapping the eMZVs of open strings to the MGFs of closed strings. Based on the properties of these rules, we refer to them as an...

10.48550/arxiv.2010.10558 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Counterfactuals can explain classification decisions of neural networks in a human-interpretable way. We propose a simple but effective method to generate such counterfactuals. More specifically, we perform a suitable diffeomorphic coordinate transformation and then perform gradient ascent in these coordinates to find counterfactuals which are classified with great confidence as a specified target class. We propose two methods that leverage generative models to construct such coordinate systems that are either exactly or approximately diffeomorphic....

10.48550/arxiv.2206.05075 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01