Meire Fortunato

ORCID: 0009-0002-7058-4657
Research Areas
  • Reinforcement Learning in Robotics
  • Parallel Computing and Optimization Techniques
  • Machine Learning and Algorithms
  • Domain Adaptation and Few-Shot Learning
  • Advanced Graph Neural Networks
  • Evolutionary Algorithms and Applications
  • Adversarial Robustness in Machine Learning
  • Natural Language Processing Techniques
  • Neural dynamics and brain function
  • Machine Learning and Data Classification
  • Neural Networks and Reservoir Computing
  • Machine Learning in Materials Science
  • Multimodal Machine Learning Applications
  • Graph Theory and Algorithms
  • Advanced Chemical Physics Studies
  • Advanced Numerical Methods in Computational Mathematics
  • Advanced Bandit Algorithms Research
  • Tropical and Extratropical Cyclones Research
  • Hydrological Forecasting Using AI
  • Algebraic and Geometric Analysis
  • Advanced Mathematical Theories
  • Protein Structure and Dynamics
  • Computational Fluid Dynamics and Aerodynamics
  • Computational Physics and Python Applications
  • Simulation Techniques and Applications

DeepMind (United Kingdom)
2021-2023

Google (United Kingdom)
2023

Google (United States)
2017-2020

University of California, Berkeley
2015

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and $ε$-greedy respectively) with NoisyNet yields substantially...

10.48550/arxiv.1706.10295 preprint EN other-oa arXiv (Cornell University) 2017-01-01
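The core mechanism in the abstract above (learnable noise on the weights driving exploration) can be sketched as a noisy linear layer. This is a minimal NumPy illustration of the idea, not the paper's implementation; the layer sizes, initialization bounds, and `sigma0` value are assumptions.

```python
import numpy as np

def scale(x):
    # Signal-preserving transform commonly used with factorised Gaussian noise.
    return np.sign(x) * np.sqrt(np.abs(x))

class NoisyLinear:
    """y = (mu_w + sigma_w * eps_w) @ x + (mu_b + sigma_b * eps_b).

    mu and sigma are the learnable parameters (trained by gradient descent
    alongside the rest of the network); eps is resampled noise, so the
    stochasticity lives in the weights rather than in an epsilon-greedy rule.
    Illustrative sketch only.
    """

    def __init__(self, n_in, n_out, sigma0=0.5, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(n_in)
        self.mu_w = rng.uniform(-bound, bound, (n_out, n_in))
        self.mu_b = rng.uniform(-bound, bound, n_out)
        self.sigma_w = np.full((n_out, n_in), sigma0 / np.sqrt(n_in))
        self.sigma_b = np.full(n_out, sigma0 / np.sqrt(n_in))
        self.rng = rng
        self.resample_noise()

    def resample_noise(self):
        # Factorised noise: one vector per input unit, one per output unit.
        eps_in = scale(self.rng.standard_normal(self.mu_w.shape[1]))
        eps_out = scale(self.rng.standard_normal(self.mu_w.shape[0]))
        self.eps_w = np.outer(eps_out, eps_in)
        self.eps_b = eps_out

    def __call__(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return w @ x + b

layer = NoisyLinear(4, 2)
x = np.ones(4)
y1 = layer(x)
layer.resample_noise()
y2 = layer(x)  # different noise sample -> different (exploratory) output
```

Resampling the noise between forward passes is what perturbs the policy; annealing or learning `sigma` down to zero recovers a deterministic layer.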

Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy but does not directly use historical weather data to improve the underlying model. Here, we introduce GraphCast, a machine learning-based method trained directly from reanalysis data. It predicts hundreds of weather variables for the next 10 days at 0.25° resolution globally in under 1 minute. GraphCast significantly outperforms...

10.1126/science.adi2336 article EN cc-by Science 2023-11-14

Density functional theory describes matter at the quantum level, but all popular approximations suffer from systematic errors that arise from the violation of mathematical properties of the exact functional. We overcame this fundamental limitation by training a neural network on molecular data and on fictitious systems with fractional charge and spin. The resulting functional, DM21 (DeepMind 21), correctly describes typical examples of artificial charge delocalization and strong correlation and performs better than traditional functionals...

10.1126/science.abj6511 article EN Science 2021-12-09

Mesh-based simulations are central to modeling complex physical systems in many disciplines across science and engineering. Mesh representations support powerful numerical integration methods, and their resolution can be adapted to strike favorable trade-offs between accuracy and efficiency. However, high-dimensional scientific simulations are very expensive to run, and solvers and parameters must often be tuned individually for each system studied. Here we introduce MeshGraphNets, a framework for learning mesh-based simulations using graph neural...

10.48550/arxiv.2010.03409 preprint EN other-oa arXiv (Cornell University) 2020-01-01
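The learned-simulation framework above passes messages over the mesh graph. Below is a minimal NumPy sketch of one message-passing step in that style (edge update from endpoints, sum-aggregation at receivers, residual node update). The tiny graph, latent size, and single-layer "MLPs" are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny mesh graph: 4 nodes, directed edges given as (sender, receiver) pairs.
senders = np.array([0, 1, 1, 2, 2, 3])
receivers = np.array([1, 0, 2, 1, 3, 2])

D = 8  # latent width (illustrative)
node_latent = rng.standard_normal((4, D))
edge_latent = rng.standard_normal((len(senders), D))

def mlp(w, x):
    # Stand-in for the small MLPs such frameworks use: one linear map + tanh.
    return np.tanh(x @ w)

w_edge = rng.standard_normal((3 * D, D))
w_node = rng.standard_normal((2 * D, D))

def message_passing_step(node_latent, edge_latent):
    # 1. Update each edge from its own latent and both endpoint node latents.
    edge_in = np.concatenate(
        [edge_latent, node_latent[senders], node_latent[receivers]], axis=1)
    new_edges = edge_latent + mlp(w_edge, edge_in)      # residual update
    # 2. Sum incoming edge messages at each receiver node.
    agg = np.zeros_like(node_latent)
    np.add.at(agg, receivers, new_edges)
    # 3. Update each node from its latent and the aggregated messages.
    node_in = np.concatenate([node_latent, agg], axis=1)
    new_nodes = node_latent + mlp(w_node, node_in)      # residual update
    return new_nodes, new_edges

nodes, edges = message_passing_step(node_latent, edge_latent)
```

Stacking several such steps lets information propagate across the mesh; a decoder would then read per-node dynamics (e.g. accelerations) from the final node latents.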

Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy, but cannot directly use historical weather data to improve the underlying model. We introduce a machine learning-based method called "GraphCast", which can be trained directly from reanalysis data. It predicts hundreds of weather variables, for the next 10 days at 0.25 degree resolution globally, in under one minute. We show that...

10.48550/arxiv.2212.12794 preprint EN cc-by arXiv (Cornell University) 2022-01-01

In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, while also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local...

10.48550/arxiv.1704.02798 preprint EN other-oa arXiv (Cornell University) 2017-01-01
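The variational scheme above keeps a Gaussian posterior over the recurrent weights and trains it with reparameterised samples plus a KL penalty. A minimal NumPy sketch of those two ingredients, under assumed shapes and a standard-normal prior (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational posterior q(w) = N(mu, sigma^2), elementwise, over one
# recurrent weight matrix; prior p(w) = N(0, 1). H is illustrative.
H = 5
mu = rng.standard_normal((H, H)) * 0.1
rho = np.full((H, H), -3.0)  # sigma = softplus(rho) keeps sigma > 0

def softplus(x):
    return np.log1p(np.exp(x))

def sample_weights():
    # Reparameterisation trick: w = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients flow to mu and rho. One sample is typically shared
    # across all timesteps of a truncated-BPTT window.
    eps = rng.standard_normal(mu.shape)
    return mu + softplus(rho) * eps

def kl_to_standard_normal(mu, sigma):
    # Closed-form KL(q || p) for diagonal Gaussians, summed over weights:
    # 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

w = sample_weights()
kl = kl_to_standard_normal(mu, softplus(rho))
# A per-window training loss would then be: NLL(data | w) + kl / num_windows.
```

Averaging predictions over several weight samples at test time is what yields the uncertainty estimates the abstract refers to.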

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence models and Neural Turing Machines, because the number of target classes at each step depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences, and various combinatorial optimization problems, belong to this class. Our model...

10.48550/arxiv.1506.03134 preprint EN other-oa arXiv (Cornell University) 2015-01-01
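The "pointing" mechanism described above reuses content-based attention as the output distribution itself: the softmax over attention scores is a distribution over input positions, so the output vocabulary automatically matches the (variable) input length. A minimal NumPy sketch with illustrative dimensions and random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

D = 6          # hidden size (illustrative)
n_inputs = 4   # input sequence length; can vary per example
enc = rng.standard_normal((n_inputs, D))  # encoder hidden states e_1..e_n
dec = rng.standard_normal(D)              # current decoder state d_i

W1 = rng.standard_normal((D, D))
W2 = rng.standard_normal((D, D))
v = rng.standard_normal(D)

# u_j = v^T tanh(W1 e_j + W2 d_i);  p(output_i = j | ...) = softmax(u)_j
# The softmax ranges over input POSITIONS, not a fixed vocabulary.
scores = np.tanh(enc @ W1.T + dec @ W2.T) @ v
probs = softmax(scores)
pointed = int(np.argmax(probs))  # index of the input element selected
```

Because `probs` always has one entry per input element, the same trained parameters handle sequences of any length, which is what makes tasks like sorting or convex-hull prediction expressible.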

10.1016/j.jcp.2015.11.020 article EN publisher-specific-oa Journal of Computational Physics 2015-11-17

In recent years, there has been a growing interest in using machine learning to overcome the high cost of numerical simulation, with some learned models achieving impressive speed-ups over classical solvers whilst maintaining accuracy. However, these methods are usually tested at low-resolution settings, and it remains to be seen whether they can scale to the costly high-resolution simulations that we ultimately want to tackle. In this work, we propose two complementary approaches to improve the framework from...

10.48550/arxiv.2210.00612 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others, or how well they generalize. The field also has yet to see a prevalent, consistent and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory and assess how well the agent can apply what it learns in training to a holdout set that differs from...

10.48550/arxiv.1910.13406 preprint EN other-oa arXiv (Cornell University) 2019-01-01

The measurement of time is central to intelligent behavior. We know that both animals and artificial agents can successfully use temporal dependencies to select actions. In artificial agents, little work has directly addressed (1) which architectural components are necessary for successful development of this ability, (2) how this timing ability comes to be represented in the units and actions of the agent, and (3) whether the resulting behavior of the system converges on solutions similar to those from biology. Here we studied interval timing abilities...

10.48550/arxiv.1905.13469 preprint EN other-oa arXiv (Cornell University) 2019-01-01