Looking is not enough: Multimodal attention supports the real-time learning of new words.

Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants' visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention - when infants' hands and eyes were attending to the same object - predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.

Wireless head-mounted eye tracking was used to record gaze data from infants and parents during free-flowing play with unfamiliar objects in a home-like lab environment. Neither frequency of object labeling nor infant visual attention during and around labeling utterances predicted whether infants learned the object-label mappings. Infants' multimodal attention to objects around labeling utterances was the strongest predictor of real-time learning. Taking the infant's perspective to study word learning allowed us to find new evidence that suggests a causal pathway through which infants' bodies shape their learning input.

Keywords: new words; real time; attention; learning new; multimodal attention

Journal Title: Developmental Science
Year Published: 2022
