Invited speakers
Speaker 1: Prof. Louis-Philippe Morency – Carnegie Mellon University
Title: Multimodal AI: Understanding Human Behaviors
Abstract: Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices still lack many of the human-like abilities needed to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing, and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize, and predict subtle human communicative behaviors in social contexts. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities, and interlocutors. In this talk, I will present some of our recent achievements in modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills), and social multimedia (opinion mining, social influence).
Bio: Louis-Philippe Morency is the Leonardo Associate Professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly research faculty in the Computer Science Department at the University of Southern California and received his Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. He is currently chair of the advisory committee for the ACM International Conference on Multimodal Interaction and an associate editor of IEEE Transactions on Affective Computing.
Speaker 2: Prof. Aleix M. Martinez – The Ohio State University
Title: The Face of Emotion: Production and visual perception of facial expressions of emotion
Abstract: We now have computer vision algorithms that can successfully segment regions of interest in an image, recognize objects and scenes, and even create accurate 3D models of them. But what about higher-level, abstract concepts, such as understanding what other people do, what they are interested in, and how they feel? This talk will introduce the first algorithms to successfully solve some of these problems. I will first summarize our research uncovering the image features used by the human visual system to recognize emotion in others, which include facial muscle articulation, facial color modulations, body pose, and context. I will then detail how these results can be used to define computer vision systems that work “in the wild” (i.e., outside controlled, in-lab conditions). Finally, we will discuss how these concepts can be used to design systems that interpret the intent of others, and how we can develop a computational theory of mind for AI systems.
Bio: Aleix M. Martinez is a Professor in the Department of Electrical and Computer Engineering at The Ohio State University (OSU), where he is the founder and director of the Computational Biology and Cognitive Science Lab. He is also affiliated with the Department of Biomedical Engineering and the Center for Cognitive and Brain Sciences, where he is a member of the executive committee. He is the recipient of many awards, including best paper awards at CVPR and ECCV and a Google Faculty Research Award. Aleix has served as an associate editor of several major journals devoted to vision and affect (PAMI, TAC, CVIU) and as an area chair for many top conferences (CVPR, ICCV), and he was a Program Chair for CVPR 2014. He has also served as a member of NIH’s Cognition and Perception study section. More about him: http://www2.ece.ohio-state.edu/~aleix/