| Time | Session |
| --- | --- |
| 11:00-11:05 | Welcome and Opening: Jack Fletcher (Dept. Chair, UH Psychology) |
| 11:05-11:50 | Talk 1: Cameron Buckner (UH Philosophy) |
| 12:10-12:55 | Talk 2: Yukie Nagai (Osaka University) |
| 13:00-13:45 | Talk 3: Aaron Becker (UH Electrical & Computer Engineering) |
| 13:45-14:00 | Break and Social |
| 14:00-14:45 | Talk 4: Lars Schillingmann (Osaka University) |
| 14:50-15:50 | Talk 5: Minoru Asada (Osaka University) |
| 15:50-16:00 | Closing Remarks: Haluk Ogmen (UH Electrical & Computer Engineering) |
| 16:00-16:30 | Refreshments and Networking |
Talk 5: Minoru Asada (Osaka University)
Title: How infant-caregiver interactions affect the early development of vocalization
Language communication is highly characteristic of the human species. In particular, vocal communication is a unique means of exchanging messages bilaterally in real time. The developmental origin of such communication appears to lie in the vocal interactions between an infant and a caregiver, and one of the great mysteries is how the infant learns to vocalize the caregiver's mother tongue. Many theories attempt to explain the infant's capacity for imitation in terms of acoustic matching. However, the acoustic qualities of the infant and the caregiver differ considerably, so acoustic matching alone cannot fully explain imitation. Instead, the interaction itself may play an important role, but the mechanism remains unclear. In this talk, we review studies addressing this problem through constructive approaches based on cognitive developmental robotics. First, we review the early development of infant speech perception and articulation, drawing on observational studies in developmental psychology and on neuroimaging studies. Next, computational modeling approaches are explained. Then, constructive approaches with real-robot experiments and computer simulations are introduced to discuss how infant-caregiver interactions affect the early development of vocalization. Finally, open issues in the development of language communication are discussed.
Talk 2: Yukie Nagai (Osaka University)
Title: Predictive Learning as a Key for Cognitive Development: New Insights from Developmental Robotics
Human infants acquire various cognitive abilities, such as self/other cognition, imitation, and cooperation, in the first few years of life. Although developmental studies have revealed behavioral changes in infants, the underlying mechanisms of this development are not yet fully understood. We hypothesize that predictive learning of sensorimotor information plays a key role in infant development. Predictive learning is defined as the process of minimizing the prediction error between actual sensory feedback and predicted feedback. For example, minimizing the prediction error enables infants to discriminate the self from others, because the self's body is controllable and is thus recognized as a perfectly predictable entity. Social behaviors such as imitation and cooperation also emerge through predictive learning. A failure in another's action, for example, induces a larger prediction error and thus triggers the infant's own action to reduce the error, which results in cooperative behavior. My talk will present our robotics studies investigating how predictive learning reproduces infant cognitive development. Furthermore, the potential of our hypothesis to explain the mechanism of autism spectrum disorder (ASD) will be discussed. Our research supports a recent hypothesis that ASD is characterized by a difficulty in learning sensorimotor prediction rather than in social interaction per se.
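The core loop the abstract describes (predict sensory feedback, compare against what actually arrives, adjust to shrink the error) can be illustrated with a minimal sketch. This is not the speaker's model; the linear "world", the learning rule (a simple delta rule), and all names here are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "world": actual sensory feedback is an unknown (to the
# learner) linear function of the motor command, plus a little noise.
true_W = rng.normal(size=(3, 2))

# The learner's forward model: predicts sensory feedback from the command.
W_hat = np.zeros((3, 2))
lr = 0.1

errors = []
for step in range(200):
    action = rng.normal(size=2)                            # motor command
    sensed = true_W @ action + 0.01 * rng.normal(size=3)   # actual feedback
    predicted = W_hat @ action                             # predicted feedback
    err = sensed - predicted                               # prediction error
    W_hat += lr * np.outer(err, action)                    # reduce the error
    errors.append(float(np.sum(err ** 2)))

print(f"first error: {errors[0]:.4f}, last error: {errors[-1]:.4f}")
```

Because the learner's own "body" here is a fixed, learnable mapping, the prediction error shrinks toward the noise floor, which is the sense in which a perfectly predictable entity becomes recognizable as "self" in this account.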
Talk 4: Lars Schillingmann (Osaka University)
Title: Gaze is not enough: Computational Analysis of Infant's Head Movement Measures the Developing Response to Social Interaction
Infant eye gaze is frequently studied because of its relevance as an indicator of attention. However, eye gaze is coupled with head motion. In this talk we analyze how head motion develops in different interaction contexts. For this purpose we developed an approach that estimates infant head motion from ego-perspective recordings such as those typically provided by eye-tracking systems. Our method can quantify infant head motion from existing interaction recordings even if the head was not explicitly tracked. Therefore, data from longitudinal studies collected over many years can be reanalyzed in greater detail. We applied our method to an existing longitudinal study of parent-infant interaction and found that infants' head motion in response to social interaction shows a developmental trend. Furthermore, our results indicate that this trend is not visible in gaze data alone. This suggests that head motion is an important element for understanding and measuring infants' social behavior in interaction.
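The key idea, that head motion leaves a recoverable global-motion trace in a head-mounted camera's image stream, can be sketched with a standard phase-correlation shift estimator. This is not the speaker's algorithm (their method is not specified here); the function name and synthetic frames are invented, and real footage would of course need more than a pure translation model:

```python
import numpy as np

def global_shift(frame_a, frame_b):
    """Estimate the dominant (dy, dx) pixel shift that maps frame_b onto
    frame_a via phase correlation -- a stand-in for the global-motion cue
    that head movement induces in ego-perspective video."""
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-9          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real        # peak location encodes the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap: large indices correspond to small negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: shift a random texture by (3, -5) pixels.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
moved = np.roll(img, shift=(3, -5), axis=(0, 1))
print(global_shift(moved, img))  # → (3, -5)
```

Accumulating such frame-to-frame estimates over a recording yields a head-motion signal without any explicit head tracker, which is what makes reanalysis of archived eye-tracking recordings possible in principle.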
Talk 1: Cameron Buckner (UH Philosophy)
Title: Putting representations to work in Theory of Mind (ToM) research
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a 'logical problem': the difficulty of experimentally determining whether animals and infants represent mental states (e.g. seeing) or merely the observable evidence (e.g. line of gaze) for those mental states. These skeptics argue that there is no "unique causal work" that a representation of a mental state could do over and above the perceptual evidence for that mental state. I argue that the problem is in the first instance semantic rather than methodological. While everyone agrees that ToM crucially involves representing the mental states of others, different researchers presume different implicit criteria for representation, and so disagree about what data would count as evidence for ToM. This impasse cannot be overcome merely by running more or better experiments; future debate on social cognition should either abandon the representational idiom or directly confront underlying disagreements about the nature of representation. I will propose a more ecumenical "forward-looking" approach to representation in ToM research, illustrating its empirical utility by reviewing a range of new experimental designs it suggests, including an ongoing empirical project on ravens.
Talk 3: Aaron Becker (UH Electrical & Computer Engineering)
Title: Human cognition with robot swarms
What happens when one human tries to direct over a hundred robots simultaneously? Our lab has been studying how humans interact with swarms of robots through large-scale online experiments and hands-on tests with hardware robots. Our results have surprised roboticists and provide new techniques for automation.
Bio: Aaron Becker's passion is robotics and control. Currently an Assistant Professor in Electrical and Computer Engineering at the University of Houston, he is building the Robotic Swarm Control Lab to study how humans best interact with large numbers of robots.
Previously, as a Research Fellow in a joint appointment with Boston Children's Hospital and Harvard Medical School, he implemented robots powered and controlled by the magnetic field of an MRI scanner, as a member of the Pediatric Cardiac Bioengineering Lab with Pierre Dupont. As a Postdoctoral Research Associate at Rice University in the Multi-Robot Systems Lab with James McLurkin, Aaron investigated control of distributed systems and nanorobotics with experts in those fields. His online game http://swarmcontrol.net seeks to discover the best ways for a human to control a swarm of robots, drawing on data from a community of online players.
Aaron earned his PhD in Electrical & Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Tim Bretl.
See more at https://www.youtube.com/user/aabecker5