Reinforcement Learning with Hidden Markov Models for Discovering Decision-Making Dynamics

Subjects: Methodology (stat.ME); Machine Learning (stat.ML); Applications (stat.AP); Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2401.13929 Publication Date: 2024-01-01
ABSTRACT
Major depressive disorder (MDD) presents challenges in diagnosis and treatment due to its complex and heterogeneous nature. Emerging evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD. To measure reward processing, patients perform computer-based behavioral tasks that involve making choices or responding to stimuli that are associated with different outcomes. Reinforcement learning (RL) models are fitted to extract parameters that characterize various aspects of how patients make decisions in these tasks. Recent findings suggest the inadequacy of characterizing reward learning solely based on a single RL model; instead, individuals may switch between multiple decision-making strategies over time. An important scientific question is how the dynamics of switching between strategies affect individuals' learning ability. Motivated by the probabilistic reward task (PRT) within the EMBARC study, we propose a novel RL-HMM framework for analyzing reward-based decision-making. Our model accommodates strategy switching between two distinct approaches under a hidden Markov model (HMM): subjects making decisions based on an RL model or opting for random choices. We account for a continuous RL state space and allow time-varying transition probabilities in the HMM. We introduce a computationally efficient EM algorithm for parameter estimation and employ a nonparametric bootstrap for inference. Applying our approach to the EMBARC study, we show that MDD patients are less engaged in RL compared to healthy controls, and that engagement is associated with brain activities in the negative affect circuitry during an emotional conflict task.
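To make the generative structure of the RL-HMM concrete, the following is a minimal simulation sketch of the two-strategy setup the abstract describes: a latent two-state Markov chain switches a subject between an RL (Q-learning with softmax choice) strategy and a purely random-choice strategy on a PRT-like two-armed task. All numeric values (learning rate, inverse temperature, transition probabilities, reward probabilities) are illustrative placeholders, not estimates from the paper, and this sketch omits the paper's continuous state space, time-varying transitions, and EM estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (placeholders, not values from the paper) ---
alpha = 0.1              # RL learning rate
beta = 3.0               # softmax inverse temperature
T = 200                  # number of trials
p_reward = [0.3, 0.7]    # reward probabilities of the two options (PRT-like)

# 2-state HMM over latent engagement: 0 = random choice, 1 = RL-driven.
# Row s gives P(next state | current state = s); here time-homogeneous
# for simplicity (the paper allows time-varying transition probabilities).
trans = np.array([[0.90, 0.10],
                  [0.05, 0.95]])

Q = np.zeros(2)          # action values
state = 1                # start in the engaged (RL) state
choices, rewards, states = [], [], []

for t in range(T):
    if state == 1:
        # Engaged: softmax choice based on the Q-value difference
        p1 = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
        a = int(rng.random() < p1)
    else:
        # Disengaged: choose uniformly at random, ignoring Q-values
        a = int(rng.random() < 0.5)
    r = int(rng.random() < p_reward[a])   # Bernoulli reward
    Q[a] += alpha * (r - Q[a])            # delta-rule value update
    choices.append(a)
    rewards.append(r)
    states.append(state)
    # Markov transition of the latent engagement state
    state = int(rng.random() < trans[state, 1])

print(f"engaged fraction: {np.mean(states):.2f}, "
      f"reward rate: {np.mean(rewards):.2f}")
```

Fitting such a model to real choice data (where `states` is unobserved) is what the paper's EM algorithm addresses: the E-step computes posterior probabilities of the latent engagement state via HMM forward-backward recursions, and the M-step updates the RL and transition parameters.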