Looking is not enough: Multimodal attention supports the real-time learning of new words.

Developmental Science (2023)

Abstract
Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants' visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants' hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.
Keywords
attention, eye tracking, multimodal behaviors, parent-infant interaction, word learning