Sophia Sanborn

ORCID: 0000-0002-1957-7067
Research Areas
  • Topological and Geometric Data Analysis
  • Neural Dynamics and Brain Function
  • Neural Networks and Reservoir Computing
  • Advanced Memory and Neural Computing
  • Child and Animal Learning Development
  • Neural Networks and Applications
  • Cell Image Analysis Techniques
  • AI-Based Problem Solving and Planning
  • Explainable Artificial Intelligence (XAI)
  • Morphological Variations and Asymmetry
  • Plant and Biological Electrophysiology Studies
  • Advanced Software Engineering Methodologies
  • Chaos-Based Image/Signal Encryption
  • Evolutionary Algorithms and Applications
  • Human-Automation Interaction and Safety
  • Social Robot Interaction and HRI
  • Advanced Graph Neural Networks
  • Cognitive Science and Education Research
  • Geological Modeling and Analysis
  • Sensory Analysis and Statistical Methods
  • Algorithms and Data Compression
  • Advanced Data Compression Techniques
  • Teaching and Learning Programming
  • Mathematical Analysis and Transform Methods
  • Behavioral and Psychological Studies

Stanford University
2024

University of California, Santa Barbara
2023

Intel (United States)
2021-2022

University of California, Berkeley
2017-2022

Berkeley College
2018

General Electric (United States)
2005

The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful neuron models with fully programmable dynamics. Here we showcase advanced neuron models that can be used to efficiently process streaming data in simulation experiments on emulated Loihi 2 hardware. In one example, Resonate-and-Fire (RF) neurons compute the Short Time Fourier...

10.1109/sips52927.2021.00053 article EN 2021-10-01
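
A minimal sketch of the idea in plain NumPy (not Lava or any Loihi toolchain; all parameters hypothetical): a resonate-and-fire neuron can be viewed as a damped complex oscillator, so a bank of them tuned to different frequencies accumulates state magnitudes resembling an exponentially windowed STFT. Real RF neurons emit spikes on phase crossings; here the complex state is read out directly.

import numpy as np

def rf_bank(signal, freqs, dt, decay=0.98):
    # Each "neuron" holds a complex state that rotates at its resonant
    # frequency and decays geometrically; the input drives it additively.
    # The state magnitude tracks signal energy near that frequency,
    # playing the role of one STFT bin.
    phases = np.exp(2j * np.pi * np.asarray(freqs) * dt)
    z = np.zeros(len(freqs), dtype=complex)
    mags = np.empty((len(signal), len(freqs)))
    for t, x in enumerate(signal):
        z = decay * phases * z + x   # a spiking version would threshold Im(z)
        mags[t] = np.abs(z)
    return mags                      # shape (time, frequency), ~ spectrogram

# Usage: a 50 Hz tone sampled at 1 kHz lights up the bin nearest 50 Hz.
dt = 1e-3
t = np.arange(0, 1, dt)
spec = rf_bank(np.sin(2 * np.pi * 50 * t), freqs=np.arange(10, 101, 10), dt=dt)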

For adults, "no" and "not" change the truth-value of the sentences they compose with. To investigate children's emerging understanding of these words, an experimenter hid a ball in a bucket or a truck, then gave an affirmative or negative clue (Experiment 1: "It's not in the bucket"; Experiment 2: "Is it in the bucket?"; "No, it's not"). Replicating Austin, Theakston, Lieven, & Tomasello (2014), children only understood logical negation after age two, long after they begin to say these words, but around the time they begin to use both words to deny statements. To test whether this simply...

10.1080/15475441.2017.1317253 article EN Language Learning and Development 2017-06-28

The natural world is full of complex systems characterized by intricate relations between their components: from social interactions between individuals in a social network to electrostatic interactions between atoms in a protein. Topological Deep Learning (TDL) provides a comprehensive framework to process and extract knowledge from data associated with these systems, such as predicting the community to which an individual belongs or whether a protein can be a reasonable target for drug development. TDL has demonstrated theoretical and practical...

10.48550/arxiv.2304.10031 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Generalization is a fundamental problem solved by every cognitive system in essentially every domain. Although it is known that how people generalize varies in complex ways depending on the context or domain, it remains an open question how people learn the appropriate way to generalize for a new context. To understand this capability, we cast the problem of learning how to generalize as a problem of learning the hypothesis space over which generalization occurs. We propose a normative mathematical framework for learning inductive biases, inferring which properties are relevant for generalization in a domain from its statistical structure...

10.1111/cogs.12777 article EN Cognitive Science 2019-08-01
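
A toy illustration of the framework's flavor (my construction, not the paper's model): given several tasks whose positive examples always cluster tightly on one stimulus dimension, Bayesian evidence accumulates for the hypothesis space "generalize along that dimension", which then shapes generalization in a new task.

import numpy as np

# Hypothetical setup: 8 tasks, 5 positive examples each, 2 stimulus dimensions.
# Examples cluster tightly on dimension 0 and vary freely on dimension 1.
rng = np.random.default_rng(0)
tasks = [rng.normal([0.5, rng.uniform()], [0.02, 0.3], size=(5, 2))
         for _ in range(8)]

def log_evidence(task, dim, tight=0.05, broad=0.5):
    # Evidence that `dim` is the generalization-relevant dimension: examples
    # should cluster tightly on it and spread broadly on the other one.
    other = 1 - dim
    ll = np.sum(-0.5 * ((task[:, dim] - task[:, dim].mean()) / tight) ** 2
                - np.log(tight))
    ll += np.sum(-0.5 * ((task[:, other] - 0.5) / broad) ** 2 - np.log(broad))
    return ll

scores = np.array([[log_evidence(t, d) for d in (0, 1)] for t in tasks]).sum(0)
posterior = np.exp(scores - scores.max())
posterior /= posterior.sum()
print(posterior)   # mass concentrates on dimension 0: a learned inductive bias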

The enduring legacy of Euclidean geometry underpins classical machine learning, which, for decades, has been primarily developed for data lying in Euclidean space. Yet, modern machine learning increasingly encounters richly structured data that is inherently non-Euclidean. This data can exhibit intricate geometric, topological, and algebraic structure: from the curvature of space-time, to topologically complex interactions between neurons in the brain, to transformations describing the symmetries of physical systems. Extracting knowledge from such...

10.48550/arxiv.2407.09468 preprint EN arXiv (Cornell University) 2024-07-12

The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this work, we combine topological deep generative models and extrinsic Riemannian geometry to introduce a novel approach for studying neural manifolds. This approach (i) computes an explicit parameterization of the manifolds and (ii) estimates their local curvature, hence quantifying their shape within the neural state space. Importantly, we prove that our methodology is invariant with respect...

10.1109/cvprw59228.2023.00068 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2023-06-01
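
The paper's pipeline uses topological deep generative models; as a stand-in for the curvature-estimation step only, the sketch below recovers the extrinsic curvature of a synthetic ring "manifold" from triples of neighboring points (Menger curvature), with all data hypothetical.

import numpy as np

def menger_curvature(p, q, r):
    # Curvature of the circle through three nearby points: 4 * triangle area
    # divided by the product of the side lengths. Along a smooth curve this
    # approximates the local extrinsic curvature.
    a, b, c = q - p, r - q, r - p
    area = 0.5 * np.linalg.norm(np.cross(a, b))
    return 4 * area / (np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c))

# Synthetic "neural manifold": a ring of radius 2 embedded in a 3-D state space.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring = np.stack([2 * np.cos(theta), 2 * np.sin(theta), np.zeros_like(theta)], 1)
curv = [menger_curvature(ring[i - 1], ring[i], ring[i + 1])
        for i in range(1, len(ring) - 1)]
print(np.mean(curv))   # ~0.5 = 1/radius; a flat manifold would give ~0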

Human behavior is inherently hierarchical, resulting from the decomposition of a task into subtasks or of an abstract action into concrete actions. However, behavior is typically measured as a sequence of actions, which makes it difficult to infer its hierarchical structure. In this paper, we explore how people form hierarchically structured plans, using an experimental paradigm that makes hierarchical representations observable: participants create programs that produce sequences of actions in a language with explicit hierarchical structure. This lets us test two...

10.48550/arxiv.2311.18644 preprint EN cc-by arXiv (Cornell University) 2023-01-01
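
To make "programs with explicit hierarchical structure" concrete, here is a toy interpreter for a made-up grammar (not the paper's experimental language): lowercase letters are primitive actions, and a digit followed by a parenthesized block repeats that block.

def run(program):
    # Expand a hierarchical program into a flat action sequence.
    out, i = [], 0
    while i < len(program):
        ch = program[i]
        if ch.isdigit():                 # e.g. "3(ab)" -> a b a b a b
            depth, j = 1, i + 2          # skip the digit and its '('
            while depth:                 # find the matching ')'
                depth += {"(": 1, ")": -1}.get(program[j], 0)
                j += 1
            out += int(ch) * run(program[i + 2:j - 1])   # recurse into block
            i = j
        else:
            out.append(ch)               # primitive action
            i += 1
    return out

# Two programs with identical behavior but different hierarchical structure:
assert run("2(ab)") == list("abab") == run("abab")
assert run("2(a2(b))") == list("abbabb")     # hierarchies can nest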

The importance of hierarchically structured representations for tractable planning has long been acknowledged. However, the questions of how people discover such abstractions and how to define a set of optimal abstractions remain open. This problem has been explored in cognitive science in the problem-solving literature and in computer science in hierarchical reinforcement learning. Here, we emphasize an algorithmic perspective on learning hierarchical representations in which the objective is to efficiently encode the structure of the problem or, equivalently, to learn an algorithm with minimal length. We...

10.48550/arxiv.1807.07134 preprint EN other-oa arXiv (Cornell University) 2018-01-01
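
A one-line illustration of the minimal-length objective (a toy measure I am assuming, not the paper's actual formulation): two programs that produce the same action sequence can differ in description length, and the hierarchical one wins.

def expand(p):
    # Tiny expander for this example only: "n(xy)" -> n copies of "xy";
    # strings without parentheses are already flat action sequences.
    return p if "(" not in p else int(p[0]) * p[2:-1]

flat, hierarchical = "abababab", "4(ab)"
assert expand(flat) == expand(hierarchical)   # identical behavior
print(len(flat), len(hierarchical))           # 8 vs 5 symbols: the hierarchical
# encoding of the same behavior is shorter, i.e. preferred under a
# minimal-description-length objective.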

10.32470/ccn.2018.1265-0 article EN 2022 Conference on Cognitive Computational Neuroscience 2018-01-01

An important problem in signal processing and deep learning is to achieve \textit{invariance} to nuisance factors not relevant for the task. Since many of these factors are describable as the action of a group $G$ (e.g. rotations, translations, scalings), we want our methods to be $G$-invariant. The $G$-Bispectrum extracts every characteristic of a given signal up to group action: for example, the shape of an object in an image, but not its orientation. Consequently, the $G$-Bispectrum has been incorporated into neural network architectures as a computational primitive...

10.48550/arxiv.2407.07655 preprint EN arXiv (Cornell University) 2024-07-10
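
For the cyclic group $\mathbb{Z}_n$ (ordinary 1-D translations), the bispectrum has a closed form over Fourier coefficients, and its invariance is easy to verify numerically; a small self-contained check:

import numpy as np

def bispectrum(x):
    # Bispectrum over the cyclic group Z_n:
    # B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) mod n]).
    # The shift-induced phases of the three factors cancel exactly.
    F = np.fft.fft(x)
    n, k = len(x), np.arange(len(x))
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.normal(size=16)
shifted = np.roll(x, 5)                                     # the group action
assert np.allclose(bispectrum(x), bispectrum(shifted))      # invariant
assert not np.allclose(np.fft.fft(x), np.fft.fft(shifted))  # spectrum is not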

This letter presents an algorithm that uses time-series statistical techniques to analyze changes in the information content of individual signal time series subjected to data compression. Autocorrelation-based methods are applied to the residuals between the original and processed signals. A t-statistic analysis of the Autocorrelation Function (ACF) coefficients is used to determine the Upper Specification Limit (USL) compression ratio. The approach provides a continuous quality measurement useful in datalogging systems design...

10.1109/lsp.2004.842264 article EN IEEE Signal Processing Letters 2005-02-22
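
A rough sketch of the statistical idea only (the letter's USL determination is more involved): if compression preserved the information content, the residual should look like white noise, so each ACF coefficient, approximately N(0, 1/N) under whiteness, can be tested with a t-like statistic.

import numpy as np

def acf(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def whiteness_stats(original, reconstructed, max_lag=20):
    # Under the white-noise hypothesis each ACF coefficient has standard
    # deviation ~ 1/sqrt(N), so |r_k| * sqrt(N) behaves like a |z|-score.
    r = acf(np.asarray(original) - np.asarray(reconstructed), max_lag)
    return np.abs(r) * np.sqrt(len(original))

# Usage: coarse quantization (a crude stand-in for lossy compression) leaves a
# far more structured residual than fine quantization.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)
print(whiteness_stats(signal, np.round(signal * 100) / 100).max())  # smaller
print(whiteness_stats(signal, np.round(signal * 2) / 2).max())      # much larger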

In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary representations. Our findings have consequences for the problem of symmetry discovery. Specifically,...

10.48550/arxiv.2312.08550 preprint EN cc-by arXiv (Cornell University) 2023-01-01
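
The theorem concerns learned weights; the classical linear-algebra fact underlying the commutative case can be checked in a few lines. Any matrix that commutes with cyclic shifts (a circulant) is diagonalized by the discrete Fourier basis, i.e. the Fourier modes are the irreducible representations of $\mathbb{Z}_n$:

import numpy as np

n = 8
rng = np.random.default_rng(1)
c = rng.normal(size=n)
C = np.stack([np.roll(c, k) for k in range(n)])  # circulant: shift-equivariant map
F = np.fft.fft(np.eye(n))                        # DFT matrix

D = F @ C @ np.linalg.inv(F)                     # change to the Fourier basis
assert np.allclose(D, np.diag(np.diag(D)))       # off-diagonal terms vanish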

10.48550/arxiv.2111.03746 preprint EN other-oa arXiv (Cornell University) 2021-01-01

We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete -- that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant...

10.48550/arxiv.2209.03416 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01
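
As I read the abstract, the ansatz generalizes the bispectrum by making the linear map learnable; a minimal forward pass consistent with that reading (my paraphrase, with the actual parameterization and normalization in the paper) might look like:

import numpy as np

def bispectral_layer(W, x):
    # beta[i, j] = (Wx)_i * (Wx)_j * conj((W_i * W_j) x), where W_i * W_j is
    # the elementwise product of rows i and j. If the rows of W are the DFT
    # vectors, W_i * W_j is the (i+j)-th DFT vector and beta is the bispectrum.
    z = W @ x
    third = (W[:, None, :] * W[None, :, :]) @ x   # (W_i * W_j) x for all i, j
    return z[:, None] * z[None, :] * np.conj(third)

# With the DFT as W, the output is invariant to cyclic shifts of the signal:
n = 8
W = np.fft.fft(np.eye(n))
x = np.random.default_rng(2).normal(size=n)
assert np.allclose(bispectral_layer(W, x), bispectral_layer(W, np.roll(x, 3)))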

This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning. The competition asked participants to provide open-source implementations of topological neural networks from the literature by contributing to the python packages TopoNetX (data processing) and TopoModelX (deep learning). The challenge attracted twenty-eight qualifying submissions over its two-month duration. This paper describes the design of the challenge and summarizes its main findings.

10.48550/arxiv.2309.15188 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit $\textit{mixed selectivity}$, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented in $\textit{superposition}$, i.e., on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with...

10.48550/arxiv.2310.11431 preprint EN cc-by arXiv (Cornell University) 2023-01-01
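
A two-neuron toy version of superposition (directions chosen by hand, purely illustrative): three sparse features stored on non-orthogonal axes in a two-dimensional activation space remain decodable, yet no feature aligns with a single neuron.

import numpy as np

# Three unit feature directions at 120-degree spacing in a 2-neuron space.
angles = np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])
D = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3 features, 2 neurons)

def decode(activation):
    # If features rarely co-occur, the best-matching direction recovers them
    # even though there are more features than neurons.
    return int(np.argmax(D @ activation))

for f in range(3):
    assert decode(D[f]) == f      # each feature is recoverable
print(D)  # rows are not axis-aligned: single neurons show mixed selectivity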

We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ($G$-CNNs), which we call the $G$-triple-correlation ($G$-TC) layer. The approach leverages the theory of the triple-correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps--such as the max--are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the group actions...

10.48550/arxiv.2310.18564 preprint EN cc-by arXiv (Cornell University) 2023-01-01
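
The triple correlation itself is simple to write down for the cyclic group, and its shift invariance (unlike a max, which discards signal structure too) can be checked directly; a small sketch:

import numpy as np

def triple_correlation(x):
    # T[k1, k2] = sum_g x[g] * x[g + k1] * x[g + k2], indices mod n.
    # Its Fourier transform is the bispectrum, so it inherits completeness.
    n, g = len(x), np.arange(len(x))
    shifted = x[(g[:, None] + g[None, :]) % n]   # shifted[g, k] = x[g + k]
    return np.einsum('g,gi,gj->ij', x, shifted, shifted)

rng = np.random.default_rng(3)
x = rng.normal(size=12)
assert np.allclose(triple_correlation(x), triple_correlation(np.roll(x, 4)))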

This paper presents the computational challenge on differential geometry and topology that was hosted within the ICLR 2022 workshop "Geometric and Topological Representation Learning". The competition asked participants to provide implementations of machine learning algorithms on manifolds that would respect the API of the open-source software Geomstats (manifold part) and Scikit-Learn (machine learning part) or PyTorch. The challenge attracted seven teams over its two-month duration. This paper describes the design of the challenge and summarizes its main findings.

10.48550/arxiv.2206.09048 preprint EN cc-by arXiv (Cornell University) 2022-01-01
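
For flavor, here is the kind of API compliance the challenge asked for, in miniature: a scikit-learn-style estimator (fit returns self, learned attributes end in an underscore) that computes a mean constrained to the sphere. This is a hypothetical stand-in using plain NumPy, not an actual submission and not the Geomstats API.

import numpy as np

class SphericalMean:
    # Minimal manifold-aware estimator: average unit vectors extrinsically,
    # then project the result back onto the sphere.
    def fit(self, X, y=None):
        m = np.asarray(X).mean(axis=0)
        self.mean_ = m / np.linalg.norm(m)
        return self

# Usage: points scattered around the north pole of the 2-sphere.
rng = np.random.default_rng(4)
X = rng.normal([0.0, 0.0, 1.0], 0.1, size=(100, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(SphericalMean().fit(X).mean_)   # close to [0, 0, 1]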