Expected reward value and reward prediction errors reinforce but also interfere with human time perception

Emily K. DiMarco, Ashley Ratcliffe Shipp, Kenneth T. Kishida

bioRxiv (2024)

Abstract
Time perception is often investigated in animal models and in humans using instrumental paradigms where reinforcement learning (RL) and associated dopaminergic processes have modulatory effects. For example, interval timing, which includes the judgment of relatively short intervals of time (ranging from milliseconds to minutes), has been shown to be modulated by manipulations of striatal dopamine. The 'expected value of reward' (EV) and 'reward prediction errors' (RPEs) are key variables described in RL-theory that explain dopaminergic signals during reward processing in instrumental learning. Notably, the underlying connection between RL-processes and time perception in humans is relatively underexplored. Herein, we investigated the impact of EV and RPEs on interval timing in humans. We tested the hypotheses that EV and RPEs modulate the experience of short time intervals. We demonstrate that expectations of monetary gains or losses increase the initial performance error for 1000ms intervals. Temporal learning over repeated trials is observed, with accelerated learning of non-reinforced 1000ms intervals; however, RPEs – specifically about rewards and not punishments – appear to reinforce performance errors, which effectively interferes with the rate at which (reinforced) 1000ms intervals were learned. These effects were not significant for 3000ms and 5000ms intervals. Our results demonstrate that EV and RPEs influence human timing behavior for 1000ms intervals. We discuss our results in light of model-free 'temporal difference RL-theory', which suggests the hypothesis that interval timing may be mediated by dopaminergic signals that reinforce the learning and prediction of dynamic state-transitions, which could be encoded without an explicit reference to 'time' intervals.

Competing Interest Statement: The authors have declared no competing interest.
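As an illustration of the RL-theory variables the abstract refers to, the following sketch shows a standard TD(0) update, in which the reward prediction error (RPE) is the mismatch between received and expected reward and drives the update of the expected value (EV). This is a generic textbook formulation, not the authors' task model; the function name and the learning-rate and discount parameters are illustrative assumptions.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.95):
    """One TD(0) step (illustrative, not the paper's model).

    RPE = reward + gamma * V(s') - V(s); the expected value V(s)
    is then nudged toward the target by learning rate alpha.
    """
    rpe = reward + gamma * next_value - value   # reward prediction error
    new_value = value + alpha * rpe             # updated expected value (EV)
    return new_value, rpe

# Example: a state predicted no reward (EV = 0.0), but a reward of 1.0
# arrives -> a positive RPE of 1.0, and EV moves up to 0.1.
v, delta = td_update(value=0.0, reward=1.0, next_value=0.0)
```

Note that in this formulation nothing references clock time explicitly: values attach to states and their transitions, which is the sense in which the discussion suggests interval timing could be encoded without an explicit 'time' variable.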