Computational Modeling Of Behavioral Tasks: An Illustration On A Classic Reinforcement Learning Paradigm

QUANTITATIVE METHODS FOR PSYCHOLOGY (2021)

Abstract
There has been a growing interest among psychologists, psychiatrists and neuroscientists in applying computational modeling to behavioral data to understand animal and human behavior. Such approaches can be daunting for those without experience. This paper presents a step-by-step tutorial to conduct parameter estimation in R via three techniques: Maximum Likelihood Estimation (MLE), Maximum A Posteriori (MAP) and Expectation-Maximization with Laplace approximation (EML). We first demonstrate how to simulate a classic reinforcement learning paradigm, the two-armed bandit task, for N = 100 subjects, and then explain how to develop the computational model and implement the MLE, MAP and EML methods to recover the parameters. By presenting a sufficiently detailed walkthrough on a familiar behavioral task, we hope this tutorial will benefit readers interested in applying parameter estimation methods in their own research.
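As a rough illustration of the workflow summarized above, the following sketch (not the paper's own code; the function and parameter names simulate_subject, neg_log_lik, alpha and beta are hypothetical) simulates one subject on a two-armed bandit with a delta-rule learner and softmax choice, then recovers the learning rate and inverse temperature by maximum likelihood with R's optim(). MAP estimation would use the same objective with the negative log-prior of the parameters added to it.

# Minimal sketch: simulate a two-armed bandit subject, then recover the
# learning rate (alpha) and inverse temperature (beta) by MLE with optim().
set.seed(1)

simulate_subject <- function(alpha, beta, n_trials = 200,
                             reward_prob = c(0.7, 0.3)) {
  Q <- c(0, 0)                                   # initial action values
  choice <- reward <- integer(n_trials)
  for (t in seq_len(n_trials)) {
    p1 <- 1 / (1 + exp(-beta * (Q[1] - Q[2])))   # softmax probability of arm 1
    choice[t] <- ifelse(runif(1) < p1, 1, 2)
    reward[t] <- rbinom(1, 1, reward_prob[choice[t]])
    Q[choice[t]] <- Q[choice[t]] + alpha * (reward[t] - Q[choice[t]])  # delta rule
  }
  data.frame(choice = choice, reward = reward)
}

neg_log_lik <- function(par, dat) {
  alpha <- par[1]; beta <- par[2]
  Q <- c(0, 0); nll <- 0
  for (t in seq_len(nrow(dat))) {
    p1 <- 1 / (1 + exp(-beta * (Q[1] - Q[2])))
    p_choice <- if (dat$choice[t] == 1) p1 else 1 - p1
    nll <- nll - log(p_choice + 1e-10)           # accumulate negative log-likelihood
    Q[dat$choice[t]] <- Q[dat$choice[t]] + alpha * (dat$reward[t] - Q[dat$choice[t]])
  }
  nll
}

dat <- simulate_subject(alpha = 0.3, beta = 3)
fit <- optim(par = c(0.5, 1), fn = neg_log_lik, dat = dat,
             method = "L-BFGS-B", lower = c(0.001, 0.001), upper = c(1, 20))
fit$par  # estimated alpha and beta should lie near the simulating values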
Keywords
Computational modeling, reinforcement learning, two-armed bandit, parameter estimation, maximum likelihood estimation, maximum a posteriori, expectation-maximization; Tools: R