Kyriacos Nikiforou

ORCID: 0000-0002-1504-5725
Research Areas
  • Neural dynamics and brain function
  • Neuroscience and Music Perception
  • Neural Networks and Applications
  • Visual perception and processing mechanisms
  • Domain Adaptation and Few-Shot Learning
  • Multisensory perception and integration
  • Machine Learning and Algorithms
  • Multimodal Machine Learning Applications
  • Machine Learning and Data Classification
  • Human Pose and Action Recognition
  • Explainable Artificial Intelligence (XAI)
  • Reinforcement Learning in Robotics
  • Memory and Neural Mechanisms
  • Neural Networks and Reservoir Computing
  • Data Stream Mining Techniques
  • Minerals Flotation and Separation Techniques
  • Advanced Neural Network Applications
  • Calcium Carbonate Crystallization and Inhibition
  • Advanced Memory and Neural Computing
  • Advanced Graph Neural Networks
  • Stochastic dynamics and bifurcation
  • Neural and Behavioral Psychology Studies
  • Petroleum Processing and Analysis

Imperial College London
2015-2022

DeepMind (United Kingdom)
2019

Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how this might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and accumulated salient changes in activation are...

10.1038/s41467-018-08194-7 article EN cc-by Nature Communications 2019-01-11
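The mechanism in the abstract above lends itself to a compact illustration. Below is a minimal numpy sketch, not the published model: a small random feed-forward network stands in for the pretrained image classification network, and the layer sizes and salience threshold are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained classification network: two random
# projection layers with ReLU (sizes are hypothetical).
layers = [rng.normal(0, 0.05, (4096, 512)),
          rng.normal(0, 0.05, (512, 128))]

def activations(frame):
    """Per-layer activations for one flattened video frame."""
    acts, h = [], frame
    for W in layers:
        h = np.maximum(h @ W, 0.0)          # ReLU
        acts.append(h)
    return acts

def estimated_ticks(frames, threshold=2.0):
    """Accumulate a 'tick' whenever a layer's activation changes by
    more than `threshold` between consecutive frames (salient change)."""
    ticks, prev = 0, activations(frames[0])
    for frame in frames[1:]:
        cur = activations(frame)
        ticks += sum(np.linalg.norm(a - b) > threshold
                     for a, b in zip(prev, cur))
        prev = cur
    return ticks

# A fast-changing clip accumulates more ticks (read out as a longer
# duration) than a near-static one.
busy = [rng.normal(size=4096) for _ in range(100)]
static = [busy[0] + 0.01 * rng.normal(size=4096) for _ in range(100)]
print(estimated_ticks(busy), estimated_ticks(static))
```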

Human perception and experience of time are strongly influenced by ongoing stimulation, memory of past experiences, and the required task context. When paying attention to time, time seems to expand; when distracted, it seems to contract. When considering time based on memory, the experience may be different from what it was in the moment, exemplified by sayings like “time flies when you're having fun.” Experience of time also depends on the content of perceptual experience—rapidly changing or complex scenes seem longer in duration than less dynamic ones. The complexity...

10.1162/neco_a_01514 article EN Neural Computation 2022-06-07

With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual reasoning tasks of varying complexity. We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks, compared to a number of baseline...

10.48550/arxiv.1905.10307 preprint EN other-oa arXiv (Cornell University) 2019-01-01
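One way to picture "propositional representations with an explicitly relational structure" is as vectors computed from pairs of entities rather than one entangled scene vector. The sketch below is a hedged toy under that reading, not the paper's architecture: the entity size D, relation count K, and the linear comparison are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
D, K = 16, 4   # entity feature size and relation dimensions (hypothetical)

# Learned comparison directions: each row scores one way in which a
# pair of entities can differ, so the output reads like a K-ary
# proposition about the pair.
proj = rng.normal(0, 0.3, (K, D))

def relation_vector(e1, e2):
    """Compare two entity vectors along K learned dimensions."""
    return proj @ (e1 - e2)

def propositional_representation(entities):
    """Relation vectors for all ordered pairs: a representation with
    explicitly relational structure."""
    return np.stack([relation_vector(a, b)
                     for i, a in enumerate(entities)
                     for j, b in enumerate(entities) if i != j])

entities = rng.normal(size=(3, D))  # e.g. three entities extracted from pixels
print(propositional_representation(entities).shape)  # (6, K): 6 ordered pairs
```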

The neural basis of time perception remains unknown. A prominent account is the pacemaker-accumulator model, wherein regular ticks of some physiological or neural pacemaker are read out as time. Putative candidates for the pacemaker have been suggested in physiological processes (heartbeat) and in dopaminergic mid-brain neurons, whose activity has been associated with spontaneous blinking. However, such proposals have difficulty accounting for observations that duration perception varies systematically with perceptual content. We examined physiological influences on human duration...

10.1525/collabra.234 article EN cc-by Collabra Psychology 2019-01-01
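For context, the pacemaker-accumulator account that this abstract critiques can be stated in a few lines. A minimal sketch, with a hypothetical tick rate:

```python
import numpy as np

rng = np.random.default_rng(1)

def pacemaker_estimate(true_duration_s, rate_hz=10.0):
    """Classic pacemaker-accumulator: a pacemaker emits Poisson ticks
    at a fixed rate, an accumulator counts them, and the count is read
    out as elapsed time (the 10 Hz rate here is hypothetical)."""
    ticks = rng.poisson(rate_hz * true_duration_s)   # accumulate ticks
    return ticks / rate_hz                           # read count out as seconds

# With a fixed tick rate the estimate tracks physical time but is blind
# to perceptual content, which is the difficulty raised in the abstract.
for d in (1.0, 4.0, 16.0):
    print(d, pacemaker_estimate(d))
```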

Recently developed deep learning models are able to learn to segment scenes into component objects without supervision. This opens many new and exciting avenues of research, allowing agents to take objects (or entities) as inputs, rather than pixels. Unfortunately, while these models provide excellent segmentation of a single frame, they do not keep track of how objects segmented at one time-step correspond (or align) to those at a later time-step. The alignment (or correspondence) problem has impeded progress towards using object...

10.48550/arxiv.2007.08973 preprint EN other-oa arXiv (Cornell University) 2020-01-01
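The alignment (correspondence) problem described above reduces, in its simplest form, to matching two sets of slot embeddings. Below is a minimal sketch using Hungarian matching on raw distances; a learned model would replace the distance with a learned similarity, and the shapes here are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)

def align(slots_t, slots_t1):
    """Match object slots at time t to slots at time t+1 by minimising
    total embedding distance (Hungarian algorithm). Returns cols with
    cols[i] = index at t+1 of the object that occupied slot i at t."""
    cost = np.linalg.norm(slots_t[:, None, :] - slots_t1[None, :, :], axis=-1)
    _, cols = linear_sum_assignment(cost)
    return cols

# Toy check: shuffle the slots, add a little noise, recover the shuffle.
slots_t = rng.normal(size=(4, 8))
perm = rng.permutation(4)
slots_t1 = np.empty_like(slots_t)
slots_t1[perm] = slots_t + 0.01 * rng.normal(size=(4, 8))
print(align(slots_t, slots_t1))   # matches perm
print(perm)
```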

Human perception and experience of time is strongly influenced by ongoing stimulation, memory of past experiences, and the required task context. When paying attention to time, time seems to expand; when distracted, it seems to contract. When considering time based on memory, the experience may be different from what it was in the moment, exemplified by sayings like “time flies when you’re having fun”. Experience of time also depends on the content of perceptual experience – rapidly changing or complex scenes seem longer in duration than less dynamic ones. The complexity of interactions...

10.1101/2020.02.17.953133 preprint EN cc-by-nc bioRxiv (Cold Spring Harbor Laboratory) 2020-02-17

The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge of methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with an identical control-flow backbone. Here, instead, we focus on constructing a generalist learner -- a single graph neural network processor...

10.48550/arxiv.2209.11142 preprint EN cc-by arXiv (Cornell University) 2022-01-01
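The specialist-versus-generalist distinction can be made concrete: a generalist reasoner keeps one shared graph-network processor and swaps only small task-specific encoders and decoders. The sketch below is an untrained toy under that assumption; all sizes and update rules are hypothetical, not the paper's processor.

```python
import numpy as np

rng = np.random.default_rng(4)
H = 32   # shared latent size (hypothetical)

# A single shared processor used for every algorithm...
W_msg = rng.normal(0, 0.1, (2 * H, H))
W_upd = rng.normal(0, 0.1, (2 * H, H))

def processor_step(h, edges):
    """One message-passing step of the shared graph-network processor:
    aggregate messages along edges, then update every node state."""
    msgs = np.zeros_like(h)
    for u, v in edges:
        msgs[v] += np.maximum(np.concatenate([h[u], h[v]]) @ W_msg, 0.0)
    return np.tanh(np.concatenate([h, msgs], axis=-1) @ W_upd)

# ...wrapped by small task-specific encoders/decoders (hypothetical,
# one pair per algorithm), so the processor weights are reused.
def make_task_head(in_dim, out_dim):
    enc = rng.normal(0, 0.1, (in_dim, H))
    dec = rng.normal(0, 0.1, (H, out_dim))
    return (lambda x: x @ enc), (lambda h: h @ dec)

enc_sort, dec_sort = make_task_head(1, 1)   # e.g. a sorting task
enc_path, dec_path = make_task_head(2, 1)   # e.g. a shortest-path task

x = rng.normal(size=(5, 1))                 # five nodes, scalar feature
edges = [(i, j) for i in range(5) for j in range(5) if i != j]
h = enc_sort(x)
for _ in range(3):                          # a few processor iterations
    h = processor_step(h, edges)
print(dec_sort(h).shape)                    # (5, 1) per-node output
```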

Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how this might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we built an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and the accumulation of salient...

10.1101/172387 preprint EN cc-by-nc bioRxiv (Cold Spring Harbor Laboratory) 2017-08-04

We present a novel neural episodic memory architecture that utilizes reservoir computing to extract and recall information gleaned over time from a multilayer perceptron that receives sensory input. Reservoir models project input data into a high-dimensional dynamical space and also serve as a fading memory that holds information on past inputs, thereby enabling the direct association of the current input with the past. The presented architecture achieves these capabilities via an abstract feedback mechanism and in doing so creates attractor-like states within the network that are...

10.1109/ijcnn.2016.7727887 article EN 2016 International Joint Conference on Neural Networks (IJCNN) 2016-07-01
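The fading-memory property this abstract leans on is easy to demonstrate with a standard echo state network: a random reservoir scaled below spectral radius one, plus a ridge-regression read-out trained to recall an input from several steps back. A minimal sketch (reservoir size, delay, and regularisation are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps, delay = 300, 2000, 5   # reservoir size, sequence length, recall lag

# Random recurrent reservoir scaled below spectral radius 1 so that
# past inputs fade gradually rather than vanish instantly.
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))

u = rng.uniform(-1, 1, size=(steps, 1))      # stand-in sensory stream
x = np.zeros(N)
states = np.empty((steps, N))
for t in range(steps):
    x = np.tanh(W @ x + W_in @ u[t])         # project input into a
    states[t] = x                            # high-dimensional dynamical space

# Linear read-out trained (ridge regression) to recall the input from
# `delay` steps ago: direct association of the present with the past.
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ w_out
print("recall correlation:", np.corrcoef(pred.ravel(), y.ravel())[0, 1])
```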

The neural basis of time perception remains unknown. A prominent account is the pacemaker-accumulator model, wherein regular ticks of some physiological or neural pacemaker are read out as time. Putative candidates for the pacemaker have been suggested in physiological processes (heartbeat) and in dopaminergic mid-brain neurons, whose activity has been associated with spontaneous blinking. However, such proposals have difficulty accounting for observations that duration perception varies systematically with perceptual content. We examined physiological influences on human duration...

10.31234/osf.io/zste8 preprint EN 2018-12-08

Continuous-time recurrent neural networks are widely used as models of neural dynamics and also have applications in machine learning. But their dynamics are not yet well understood, especially when they are driven by external stimuli. In this article, we study the response of stable and unstable networks to different harmonically oscillating stimuli by varying a parameter ρ, the ratio between the timescale of the network and that of the stimulus, and use the dimensionality of the network's attractor as an estimate of the complexity of the response. Additionally, we propose a novel technique for...

10.1007/s12559-017-9464-6 article EN cc-by Cognitive Computation 2017-04-06
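A continuous-time RNN driven by a harmonic stimulus, with the timescale ratio ρ swept and the response dimensionality estimated from PCA, can be sketched directly. The integration scheme, gain, and the participation-ratio dimensionality estimate below are illustrative choices, not necessarily those of the article:

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, T = 100, 0.01, 20000   # units, Euler step, steps (hypothetical)

W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))   # recurrent weights
w_in = rng.normal(size=N)                     # stimulus projection

def response(rho):
    """Euler-integrate tau*dx/dt = -x + W tanh(x) + w_in sin(omega t),
    with rho = (network timescale tau) / (stimulus timescale 1/omega),
    so omega = rho when tau = 1."""
    tau, omega = 1.0, rho
    x = 0.1 * rng.normal(size=N)
    xs = []
    for t in range(T):
        x = x + (dt / tau) * (-x + W @ np.tanh(x)
                              + w_in * np.sin(omega * t * dt))
        if t > T // 2:        # discard the transient
            xs.append(x.copy())
    return np.array(xs)

def participation_ratio(xs):
    """PCA-based estimate of how many dimensions the response explores."""
    lam = np.linalg.eigvalsh(np.cov(xs.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

for rho in (0.1, 1.0, 10.0):
    print(rho, participation_ratio(response(rho)))
```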

Many important tasks are defined in terms of objects. To generalize across these tasks, a reinforcement learning (RL) agent needs to exploit the structure that objects induce. Prior work has either hard-coded object-centric features, used complex generative models, or updated state using local spatial features. However, these approaches have had limited success in enabling general RL agents. Motivated by this, we introduce "Feature-Attending Recurrent Modules" (FARM), an architecture for...

10.48550/arxiv.2112.08369 preprint EN cc-by arXiv (Cornell University) 2021-01-01
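One reading of "feature-attending recurrent modules" is a set of independent recurrent modules, each attending over the same spatial feature set with its own query before updating its own state. The sketch below follows that reading; the update equations and sizes are hypothetical, not FARM's exact design.

```python
import numpy as np

rng = np.random.default_rng(7)
D, H, M = 12, 16, 2   # feature size, module state size, module count

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

class Module:
    """One recurrent module: attend over spatial feature vectors with
    a state-dependent query, then update the module's own state."""
    def __init__(self):
        self.W_q = rng.normal(0, 0.3, (H, D))     # state -> attention query
        self.W_h = rng.normal(0, 0.3, (H + D, H)) # recurrent update
        self.h = np.zeros(H)

    def step(self, feats):
        attn = softmax(feats @ (self.W_q.T @ self.h))   # weight the features
        read = attn @ feats                             # attended read-out
        self.h = np.tanh(np.concatenate([self.h, read]) @ self.W_h)
        return self.h

modules = [Module() for _ in range(M)]
feats = rng.normal(size=(9, D))   # e.g. a 3x3 grid of spatial features
state = np.concatenate([m.step(feats) for m in modules])
print(state.shape)                # agent state = concatenated module states
```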

Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating that they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many different inputs for which an algorithm will perform certain intermediate computations identically....

10.48550/arxiv.2302.10258 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Hierarchical Reinforcement Learning (HRL) agents have the potential to demonstrate appealing capabilities such as planning and exploration with abstraction, transfer, and skill reuse. Recent successes with HRL across different domains provide evidence that practical, effective HRL agents are possible, even if existing agents do not yet fully realize the potential of HRL. Despite these successes, visually complex partially observable 3D environments have remained a challenge for HRL agents. We address this issue with Hybrid Offline-Online (H2O2),...

10.48550/arxiv.2302.14451 preprint EN cc-by arXiv (Cornell University) 2023-01-01