Keynote Speakers

Professor Ursula Hess

Title: A Bidirectional Lens on Context and Emotional Expressions

Abstract: We almost never encounter facial expressions in isolation; rather, they are embedded in rich, dynamic contexts. Recent research on human interaction has shifted from the traditional view of expressions as stand-alone signals to the claim that context is the primary driver of emotional meaning. From this perspective, facial expressions are inherently ambiguous cues whose interpretation hinges entirely on the surrounding situation.

But this one-way view misses a critical point: both context and expression provide information, and the question is how that information is integrated. I propose a bidirectional perspective: just as context influences the interpretation of facial expressions, expressions carry sufficient intrinsic meaning to influence, in turn, the interpretation of the situation that elicited them. The real question is therefore not whether context or expression drives emotion understanding, but when and how each source of information becomes more informative.

Bio: Ursula Hess is Professor of Psychology at the Humboldt University of Berlin. Her research focuses on human emotion communication, in particular processes related to nonverbal synchronisation (mimicry and contagion) and the role of emotion expressions in impression formation. She has over 200 scholarly publications, including six edited books, and is a former president of the Society for Research on Emotion and the Society for Psychophysiological Research.

A/Prof Dom Dwyer

Title: Recognise, Interpret, Simulate… Now What? Translating AI to Make Clinical Impact

Abstract: Advances in AI are rapidly transforming how we interact with emotional and behavioural data, but their impact in frontline mental health care remains limited. This keynote explores how research in affective AI and related fields can translate into real-world value, using youth mental health services as a test case. Drawing on more than 12 years of work translating AI to the clinic, I describe the road towards implementation in three countries. I will also share our team's recent work building decision support systems that leverage natural language, speech, and clinical history to support shared decision-making in general practice and early intervention settings. The importance of translational infrastructure in bridging the translational chasm will be outlined in the context of a new $3M Medical Research Future Fund (MRFF) initiative to provide researchers with a National Critical Research Infrastructure for translating their AI models into medical devices. Within this scope, I'll discuss key challenges, including bridging the gap between software development and production, user experience and design (UX), data governance, intellectual property, and regulatory uncertainty. I will close with strategies for ensuring socially responsible deployment, from participatory design with young people to hybrid funding models that avoid exploitation. For the affective computing community, this talk offers both an invitation and a provocation: how do we move from detecting emotion to embedding emotional intelligence into the messy, high-stakes reality of care?

Bio: My vision is of a world where serious mental illness is preventable, care is proactive, and everyone has access to life-changing support. For over 10 years, I've worked to transform mental healthcare by harnessing AI, not as an end in itself, but as a way to make care more personal and create lasting change. I pioneered AI research in London and Munich for seven years before returning to Orygen to accelerate the mission within our globally leading ecosystem. I now lead the MRFF National Critical Research Infrastructure for AI in Mental Health, a $3M project providing consultancy services and software that help researchers responsibly translate AI algorithms into clinical care. I also lead initiatives to create the next generation of AI algorithms as an NHMRC Principal Research Fellow (EL2) and Chief Investigator on over $30M of associated projects. My vision is supported by a resilient organisational structure in which I am pioneering for-purpose social enterprise strategies.

Prof Flora Salim

Title: Modelling and Simulating Cyber-Physical-Social Behaviours with Multimodal Data

Abstract: Understanding and anticipating complex dynamic behaviour is fundamental to both computational social science and the scientific modelling of socio-technical systems. The behaviours of humans and systems in the wild unfold dynamically, often shaped by diverse contexts and evolving intentions. Yet data capturing real-world behaviours are inherently noisy, context-dependent, and often only partially observed. This talk synthesises recent progress in understanding behaviour at scale through data-driven modelling and simulation, highlighting the convergence of data-efficient learning, generative models, and agentic AI for complex systems analysis. Recent advances reveal how latent routines, dynamics, and behavioural patterns can be learned without explicit ground-truth supervision. We will also demonstrate the use of LLMs for synthetic data generation. These approaches reflect a shift toward data-efficient, transferable, and context-sensitive models aimed at generalisation beyond limited user data and narrow domains. We will also discuss the rise of agentic AI in enabling automated tooling and simulation, and present our new cyber-physical-social simulation generation framework, which enables automated scenario generation, behaviour testing, and what-if analysis. This framework opens new possibilities for integrating empirical data with simulated environments.

Bio: Flora Salim is a full Professor in the School of Computer Science and Engineering at the University of New South Wales (UNSW) Sydney, where she also serves as the Deputy Director (Engagement) of the UNSW AI Institute. Her work focuses on multimodal machine learning and foundation models for time-series and spatio-temporal data, behavioural modelling with multimodal sensors and wearables, robust and trustworthy machine learning, and applications of AI and LLMs for smart and sustainable cities and for mobility, transport, energy, and grid systems. She has received multiple nationally and internationally competitive fellowships, including the Humboldt Fellowship, Bayer Fellowship, Victoria Fellowship, and ARC Australian Postdoctoral Industry (APDI) Fellowship, as well as awards such as the Women in AI Award Australia and New Zealand (2022) and the IBM Smarter Planet Industry Innovation Award. She is a member of the Australian Academy of Science's National Committee for Information and Computing Sciences and an elected member of the Australian Research Council (ARC) College of Experts. She is a Vice Chair of the IEEE Task Force on AI for Time-Series and Spatio-Temporal Data. She serves on the editorial boards of ACM TIST, ACM TSAS, PACM IMWUT, IEEE Pervasive Computing, and Nature Scientific Data, and has served as a senior reviewer or area chair for NeurIPS, ICLR, WWW, and many other top-tier conferences in AI and ubiquitous computing. Prof Salim is a Chief Investigator on the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (ADM+S), co-leading the Mobilities Focus Area, and a Key Chief Investigator in the ARC Training Centre for Whole Life Design for Carbon Neutral Infrastructure, leading the Program on Machine Learning for Carbon Performance. She has worked with many industry and government partners and has managed large-scale research and innovation projects, leading to several patents and systems deployed locally and globally.

Prof Yaser Sheikh

Title: Photorealistic Telepresence

Abstract: Telepresence has the potential to bring billions of people into artificial reality (AR/MR/VR). It is the next step in the evolution of telecommunication, from telegraphy to telephony to videoconferencing. In this talk, I will describe early steps taken at Meta Reality Labs in Pittsburgh towards achieving photorealistic telepresence: real-time social interactions in AR/VR with avatars that look like you, move like you, and sound like you. If successful, photorealistic telepresence will introduce pressure for the concurrent development of the next generation of algorithms and computing platforms for computer vision and computer graphics. In particular, I will introduce codec avatars: the use of neural networks to unify the computer vision (inference) and computer graphics (rendering) problems in signal transmission and reception. The creation of codec avatars requires capture systems of unprecedented 3D sensing resolution, which I will also describe.

Bio: Yaser Sheikh is the Vice President and founding director of Meta Reality Labs in Pittsburgh, devoted to achieving photorealistic social interactions in augmented and virtual reality. He is a consulting professor at the Robotics Institute, Carnegie Mellon University, where he directed the Perceptual Computing Lab, which produced OpenPose and the Panoptic Studio. His research broadly focuses on machine perception and rendering of social behavior, spanning subdisciplines of computer vision, computer graphics, and machine learning. He has served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and has regularly served as a senior program committee member for SIGGRAPH, CVPR, and ICCV. His research has been featured by news and media outlets including The New York Times, BBC, CBS, WIRED, and The Verge. With colleagues and students, he has won the Hillman Fellowship (2004), the Honda Initiation Award (2010), Popular Science's "Best of What's New" Award (2014), and several conference best paper and demo awards (CVPR, ECCV, WACV, ICML).