Augmenting Social Reality for Inclusive and Situated Learning on May 7

May 3, 2018

Zhen Bai, currently a post-doctoral fellow at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, will speak on "Augmenting Social Reality for Inclusive and Situated Learning" on May 7 at 11am in Dell 1, room 105. Dr. Bai is a candidate for the Simulation Engineering and Precision Learning Cluster Search involving SEAS and Curry.

Zhen Bai is a post-doctoral fellow at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University. She received her Ph.D. from the Graphics & Interaction Group at the Computer Laboratory, University of Cambridge, in 2015, and is a semifinalist for a 2018 NAEd/Spencer Postdoctoral Fellowship. Her background spans augmented reality, human-robot interaction, educational technologies, playful interfaces, and design for diversity. Her research draws on multiple disciplines, including human-computer interaction, natural language processing, data science, game design, developmental psychology, and learning science, to design interactive and intelligent interfaces that support lifelong learning by removing cognitive, socio-emotional, and cultural barriers among people with diverse abilities and backgrounds.

Abstract: Learning is a social endeavor. Social interaction serves as a special learning resource for motivation, knowledge awareness, critical thinking, and conflict resolution. It is, however, less accessible to people with underdeveloped social competencies, such as students with autism, and it may not always provide positive support for students from underrepresented groups, such as ethnic minorities and those of low socioeconomic status. Supporting learning situated in the immediate physical and social environment remains challenging because an individual's thoughts and feelings are hidden from others, tightly bound to the physical space, and subject to change in response to the spontaneous and complex flow of social interaction.

In this talk, I will describe my research on the design and development of augmented, embodied, and socially aware interfaces that augment cognition and social interaction situated in the immediate social reality. I will focus on two projects that foster social skills and STEM learning for children with diverse abilities and backgrounds. The first enhances symbolic transformation through augmented reality technologies to help develop "theory of mind" in young children with and without autism. The second, "Sensing Curiosity in Play and Responding", uses theory- and data-driven approaches to characterize the fine-grained peer-to-peer interaction dynamics that lead to positive changes in curiosity, and to design a peer-like embodied conversational agent that fosters curiosity in small-group STEM learning. Through these projects, I will reflect on the interdisciplinary opportunities and future directions for designing an accessible and supportive social reality that helps people approach complex problems and ever-changing environments.