Inductive biases of neural specialization in spatial navigation

bioRxiv (2023)

Abstract
The brain may have evolved a modular architecture for reward-based learning in daily tasks, with circuits featuring functionally specialized modules that match the task structure. We propose that this architecture enables better learning and generalization than architectures with less specialized modules. To test this hypothesis, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the architecture that largely segregates computations of state representation, value, and action into specialized modules enables more efficient learning and better generalization. Behaviors of agents with this architecture also resemble macaque behaviors more closely. Investigating the latent state computations in these agents, we discovered that the learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to a Kalman filter. These results shed light on the possible rationale for the brain’s modular specializations and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.

### Competing Interest Statement

X.P. is a founder of Upload AI, LLC, a company in which he has related financial interests.
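To illustrate the uncertainty-weighted combination of prediction and observation that the abstract likens to a Kalman filter, below is a minimal one-dimensional Kalman update sketch. The function name, variable names, and noise values are illustrative assumptions, not taken from the paper; in the paper's setting, the analogous blending emerges in the agents' learned latent state rather than from an explicit filter.

```python
def kalman_update(x_pred, p_pred, z_obs, r_obs):
    """One-dimensional Kalman update: blend a prediction and an observation,
    weighting each by its relative uncertainty (variance)."""
    k = p_pred / (p_pred + r_obs)            # Kalman gain: larger when the prediction is more uncertain
    x_post = x_pred + k * (z_obs - x_pred)   # posterior estimate pulled toward the observation
    p_post = (1.0 - k) * p_pred              # posterior variance shrinks after the update
    return x_post, p_post

# Illustrative example: an uncertain prediction combined with a reliable observation.
x_post, p_post = kalman_update(x_pred=1.0, p_pred=0.5, z_obs=2.0, r_obs=0.1)
print(x_post, p_post)  # estimate moves most of the way toward the observation
```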