Keynote Speakers

Justin T. Baker, MD, PhD

McLean Hospital, Harvard Medical School

Sensing Psychosis: Deep Phenotyping of Neuropsychiatric Disorders

Traditional psychiatric care relies on subjective assessments, hindering progress in personalized treatments. However, pervasive computing offers unprecedented opportunities to develop dynamic models of mental illness by quantifying individual behavior over time and applying latent construct models. By transcending the precision-personalization dichotomy, we can revolutionize therapeutic discovery through unobtrusive, quantitative behavioral phenotyping. This presentation explores the integration of affective computing in severe mental illnesses such as depression, bipolar disorder, and schizophrenia. Affective computing enhances our understanding of illness fluctuations, contextual factors, and treatment interventions, enabling the identification of causal relationships and targeted interventions for specific neural circuits. By employing single-case experimental designs, we demonstrate the potential of affective computing to reshape psychiatric research and clinical practice. This technological integration paves the way for a closed-loop, personalized approach that optimizes care for individuals seeking treatment.

Bio

Justin T. Baker, MD, PhD, is the scientific director of the McLean Institute for Technology in Psychiatry (ITP) and director of the Laboratory for Functional Neuroimaging and Bioinformatics at McLean Hospital. He is also an assistant professor of psychiatry at Harvard Medical School.

Dr. Baker’s research uses both large-scale studies and deep, multilevel phenotyping approaches to understand the nature and underlying biology of mental illnesses. He is a clinical psychiatrist with expertise in schizophrenia and bipolar spectrum disorders, as well as other disorders of emerging adulthood. In 2016, Dr. Baker co-founded the ITP, a first-of-its-kind research and development center to foster tool development and novel applications of consumer technology in psychiatric research and care delivery.

Sun Joo (Grace) Ahn, PhD

University of Georgia

Multimodal Extensions of Realities

With the rapid rise of multimodal platforms such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), emerging science indicates that human experiences in immersive virtual environments are shaped by dynamic interactions among user characteristics, media features, and situational contexts. How do virtual experiences influence human minds, and how do they transfer into the physical world to continue shaping thoughts, feelings, and behaviors? This talk will cover findings from nearly two decades of lab and field research on multimodal extensions of our realities, with an emphasis on the media-psychological mechanisms and outcomes of user experiences.

Bio

Sun Joo (Grace) Ahn (Ph.D., Stanford University) is a Professor of Advertising at the Grady College of Journalism and Mass Communication, University of Georgia. She is the founding director of the Center for Advanced Computer-Human Ecosystems (CACHE; https://www.ugavr.com) and the co-editor-in-chief of Media Psychology. Her main program of research investigates how immersive technologies such as virtual and augmented reality transform traditional rules of communication and social interactions, examining how virtual experiences shape the way people think, feel, and behave in the physical world. Her work is funded by the National Science Foundation, National Institutes of Health, National Oceanic and Atmospheric Administration, and the Environmental Protection Agency, and has been published in numerous top-tier outlets in the fields of communication, health, and engineering.

Louis-Philippe Morency, PhD

Carnegie Mellon University

What is Multimodal?

Our experience of the world is multimodal – we see objects, hear sounds, feel texture, smell odors, and taste flavors. In recent years, a broad and impactful body of research has emerged in artificial intelligence under the umbrella of multimodal learning, characterized by the presence of multiple modalities. As we formalize a long-term research vision for multimodal research, it is important to reflect on its foundational principles and core technical challenges. What is multimodal? Answering this question is complicated by the multidisciplinary nature of the problem, which spans many domains and research fields. Two key principles have driven many multimodal innovations: the heterogeneity of modalities and the interconnections between them. Historical and recent progress will be synthesized in a research-oriented taxonomy centered around six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification. The talk will conclude with open questions and unsolved challenges essential for a long-term research vision in multimodal research.

Bio

Louis-Philippe Morency is an Associate Professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly research faculty in the Computer Science Department at the University of Southern California and received his Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. He has received numerous awards, including AI’s 10 to Watch by IEEE Intelligent Systems, the NetExplo Award in partnership with UNESCO, and 10 best-paper awards at IEEE and ACM conferences. His research has been covered by media outlets such as The Wall Street Journal, The Economist, and NPR.