Alexander Trott

ORCID: 0000-0001-5389-2602
Research Areas
  • Energy, Environment, and Transportation Policies
  • Reinforcement Learning in Robotics
  • Auction Theory and Applications
  • Domain Adaptation and Few-Shot Learning
  • Multimodal Machine Learning Applications
  • Complex Systems and Time Series Analysis
  • Economic Policies and Impacts
  • Housing Market and Economics
  • Visual perception and processing mechanisms
  • Advanced Bandit Algorithms Research
  • Financial Literacy, Pension, Retirement Analysis
  • Neurobiology and Insect Physiology Research
  • Advanced Image and Video Retrieval Techniques
  • Neural dynamics and brain function
  • Experimental Behavioral Economics Studies
  • Energy Efficiency and Management
  • Advanced Causal Inference Techniques
  • Mobile Crowdsensing and Crowdsourcing
  • Animal Behavior and Reproduction
  • Quality and Management Systems
  • Complex Systems and Decision Making
  • Insect and Arachnid Ecology and Behavior
  • Agricultural risk and resilience
  • Simulation Techniques and Applications
  • Sharing Economy and Platforms

Salesforce (United States)
2019-2023

Mosaic
2023

Harvard University
2014-2015

Brandeis University
2012

Artificial intelligence (AI) and reinforcement learning (RL) have improved many areas but are not yet widely adopted in economic policy design, mechanism design, or economics at large. The AI Economist is a two-level, deep RL framework for policy design in which agents and a social planner coadapt. In particular, the AI Economist uses structured curriculum learning to stabilize the challenging coadaptive learning problem. We validate this framework in the domain of taxation. In one-step economies, it recovers the results of optimal tax theory. In spatiotemporal economies, it substantially improves both...

10.1126/sciadv.abk2607 article EN cc-by-nc Science Advances 2022-05-04
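The two-level structure described in the abstract above can be sketched in miniature. In this illustration, an analytic best response and a grid search stand in for the paper's deep RL training, and the quadratic labor-cost model is an assumption made here for simplicity, not the paper's economy:

```python
# Toy two-level loop in the spirit of the AI Economist: an outer "planner"
# chooses a flat tax rate and inner "agents" best-respond with their labor
# supply. Both levels are solved exactly here, purely to show the structure.

def best_response_labor(wage, tax):
    # Agent utility: (1 - tax) * wage * l - l**2 / 2, maximized at
    # l* = (1 - tax) * wage (assumed quadratic labor cost, illustrative only).
    return (1.0 - tax) * wage

def revenue(wage, tax):
    # Planner payoff: tax revenue given the agents' best response.
    # revenue(tax) = tax * (1 - tax) * wage**2 is a Laffer curve.
    return tax * wage * best_response_labor(wage, tax)

def plan_tax(wage, grid_size=101):
    # Outer level: grid-search the tax rate against the inner best response.
    taus = [i / (grid_size - 1) for i in range(grid_size)]
    return max(taus, key=lambda t: revenue(wage, t))
```

With `wage=1.0`, the planner finds the Laffer-curve peak at a tax rate of 0.5; the point is only the co-adaptive loop, where the planner's choice is evaluated against agents who re-optimize in response.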

Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade off equality and productivity. We propose a two-level deep reinforcement learning approach to learn such policies, based on economic simulations in which both agents and a government adapt. Our data-driven approach does not...

10.48550/arxiv.2004.13332 preprint EN other-oa arXiv (Cornell University) 2020-01-01

In primary visual cortex (V1), neuronal responses are sensitive to context. For example, responses to stimuli presented within the receptive field (RF) center are often suppressed by stimuli in the RF surround, and this suppression tends to be strongest when the center and surround stimuli match. We sought to identify the mechanism that gives rise to these properties of surround modulation. To do so, we exploited the stability of implanted multielectrode arrays to record from neurons in V1 of alert monkeys with multiple stimulus sets that more exhaustively probed center-surround...

10.1523/jneurosci.4000-14.2015 article EN cc-by-nc-sa Journal of Neuroscience 2015-03-25

Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed-length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence...

10.48550/arxiv.1712.08697 preprint EN other-oa arXiv (Cornell University) 2017-01-01
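The sequential-counting idea above can be illustrated with a small sketch. The greedy scoring rule, the threshold, and the overlap matrix here are illustrative assumptions, not the paper's learned model, which trains these interactions end to end:

```python
def count_objects(scores, overlap, threshold=0.5):
    # Counting as a sequence of discrete choices: at each step, select the
    # best remaining detection or terminate. Selecting a detection suppresses
    # the scores of overlapping detections (a stand-in for learned
    # object-object interactions).
    selected = []
    remaining = {i: s for i, s in enumerate(scores)}
    while remaining:
        i = max(remaining, key=remaining.get)
        if remaining[i] < threshold:
            break  # terminal choice: stop counting
        selected.append(i)
        del remaining[i]
        for j in list(remaining):
            remaining[j] -= overlap[i][j]  # suppress likely duplicates
    return len(selected)
```

Two detections of the same object (high pairwise overlap) yield a count of one, because selecting the first drives the second below the stopping threshold.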

Deep learning has achieved remarkable successes in solving challenging reinforcement learning (RL) problems when a dense reward function is provided. However, in sparse reward environments it still often suffers from the need to carefully shape the reward function to guide policy optimization. This limits the applicability of RL in the real world, since both reward shaping and domain-specific knowledge are required. It is therefore of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped,...

10.48550/arxiv.1902.00528 preprint EN other-oa arXiv (Cornell University) 2019-01-01
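The contrast the abstract draws, between a hand-shaped dense reward and the binary success signal the paper targets, can be made concrete. The 1D state, the distance metric, and `goal_radius` are hypothetical stand-ins for a real task, not the paper's environments:

```python
def shaped_reward(state, goal):
    # Dense, hand-shaped signal: informative at every state, but requires
    # domain knowledge (here, that negative distance-to-goal is a good guide).
    return -abs(goal - state)

def binary_reward(state, goal, goal_radius=0.1):
    # Sparse signal: 1 only on task completion, 0 everywhere else.
    # Needs no shaping, but gives the learner no gradient far from the goal.
    return 1.0 if abs(goal - state) <= goal_radius else 0.0
```

A random policy almost never sees a nonzero `binary_reward`, which is exactly why learning from such unshaped signals is the hard setting the paper addresses.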

Mate selection is critical to ensuring the survival of a species. In the fruit fly, Drosophila melanogaster, genetic and anatomical studies have focused on mate recognition and courtship initiation for decades. This model system has proven to be highly amenable to the study of the neural systems controlling the decision-making process. However, much less is known about how courtship quality is regulated in a temporally dynamic manner in males, and how the female assesses male performance as she makes her decision whether to accept copulation. Here, we report that...

10.1371/journal.pone.0046025 article EN cc-by PLoS ONE 2012-09-25

We study the behavior of an economic platform (e.g., Amazon, Uber Eats, Instacart) under shocks, such as COVID-19 lockdowns, and the effect of different regulation considerations. To this end, we develop a multi-agent simulation environment of a platform economy in a multi-period setting where shocks may occur and disrupt the economy. Buyers and sellers are heterogeneous and are modeled as economically motivated agents, choosing whether or not to pay fees to access the platform. We use deep reinforcement learning to model the fee-setting and matching...

10.1145/3543507.3583523 article EN Proceedings of the ACM Web Conference 2022 2023-04-26
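The fee-setting-with-participation loop described above can be sketched in a single period. The participation rule and valuation list are illustrative assumptions; the paper models heterogeneous, multi-period agents with deep RL rather than a grid search:

```python
def platform_revenue(fee, valuations):
    # Participants join only if their value from the platform covers the fee,
    # so a higher fee shrinks the platform's user base.
    participants = [v for v in valuations if v >= fee]
    return fee * len(participants)

def set_fee(valuations, grid_size=101):
    # The platform's fee choice, evaluated against the participation response.
    fees = [i / (grid_size - 1) for i in range(grid_size)]
    return max(fees, key=lambda f: platform_revenue(f, valuations))
```

For valuations [0.2, 0.5, 0.8, 1.0] the revenue-maximizing fee is 0.8, pricing out the two low-value agents. A shock can be modeled as a drop in valuations, after which the same optimization yields a different fee; that feedback is what the simulation environment studies.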

Optimizing economic and public policy is critical to address socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism design problem. A designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives. Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues. Existing approaches are often limited to a narrow set of...

10.2139/ssrn.3900237 article EN SSRN Electronic Journal 2021-01-01

AI and reinforcement learning (RL) have improved many areas, but are not yet widely adopted in economic policy design, mechanism design, or economics at large. At the same time, current methodology is limited by a lack of counterfactual data, simplistic behavioral models, and limited opportunities to experiment with policies and evaluate responses. Here we show that machine-learning-based simulation is a powerful policy and mechanism design framework to overcome these limitations. The AI Economist is a two-level, deep RL framework that trains both agents and a social planner...

10.2139/ssrn.3900018 article EN SSRN Electronic Journal 2021-01-01

Optimizing economic and public policy is critical to address socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism design problem. A designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives. Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues. Existing approaches are often limited to a narrow set of...

10.48550/arxiv.2108.02904 preprint EN other-oa arXiv (Cornell University) 2021-01-01

AI and reinforcement learning (RL) have improved many areas, but are not yet widely adopted in economic policy design, mechanism design, or economics at large. At the same time, current methodology is limited by a lack of counterfactual data, simplistic behavioral models, and limited opportunities to experiment with policies and evaluate responses. Here we show that machine-learning-based simulation is a powerful design framework to overcome these limitations. The AI Economist is a two-level, deep RL framework that trains both agents and a social...

10.48550/arxiv.2108.02755 preprint EN other-oa arXiv (Cornell University) 2021-01-01

We study the problem of training a principal in a multi-agent general-sum game using reinforcement learning (RL). Learning a robust principal policy requires anticipating the worst possible strategic responses of other agents, which is generally NP-hard. However, we show that no-regret dynamics can identify these worst-case responses in poly-time in smooth games. We propose a framework that uses this evaluation method for efficiently training a robust principal policy using RL. The framework can be extended to provide robustness to boundedly rational agents too. Our motivating application...

10.1609/aaai.v37i10.26391 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26
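The evaluation idea above, running no-regret dynamics to approximate a worst-case response, can be sketched in the simplest possible setting: the principal's policy is fixed, so it induces one payoff per follower action, and a multiplicative-weights follower drives the average payoff toward the worst case. The payoff vector and step size are illustrative assumptions, not the paper's games:

```python
import math

def worst_case_value(payoffs, rounds=2000, eta=0.1):
    # Multiplicative weights (a standard no-regret algorithm) over the
    # follower's actions, with the follower trying to MINIMIZE the
    # principal's payoff. `payoffs[i]` is the principal's payoff when the
    # follower plays action i against the principal's fixed policy.
    n = len(payoffs)
    w = [1.0] * n
    total = 0.0
    for _ in range(rounds):
        z = sum(w)
        p = [wi / z for wi in w]                      # follower's mixed play
        total += sum(pi * u for pi, u in zip(p, payoffs))
        # The follower's loss for each action is the principal's payoff,
        # so weight shifts toward the principal's worst outcome.
        w = [wi * math.exp(-eta * u) for wi, u in zip(w, payoffs)]
    return total / rounds  # time-averaged payoff, approaching min(payoffs)
```

The no-regret guarantee is what makes this an evaluation method: the time-averaged payoff converges to (within regret of) the principal's worst-case value, without enumerating responses.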