{"id":683,"date":"2025-04-13T10:01:42","date_gmt":"2025-04-13T00:01:42","guid":{"rendered":"https:\/\/acii-conf.net\/2025\/?page_id=683"},"modified":"2025-08-12T14:44:34","modified_gmt":"2025-08-12T04:44:34","slug":"keynote-speakers","status":"publish","type":"page","link":"https:\/\/acii-conf.net\/2025\/keynote-speakers\/","title":{"rendered":"Keynote Speakers"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"683\" class=\"elementor elementor-683\">\n\t\t\t\t<div class=\"elementor-element elementor-element-bd47cdd e-flex e-con-boxed e-con e-parent\" data-id=\"bd47cdd\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-adff32b elementor-widget__width-initial elementor-widget elementor-widget-image\" data-id=\"adff32b\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/www.ursulakhess.com\/\">\n\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"217\" height=\"300\" src=\"https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/Ursulahess-217x300.jpg\" class=\"attachment-medium size-medium wp-image-881\" alt=\"\" srcset=\"https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/Ursulahess-217x300.jpg 217w, https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/Ursulahess.jpg 462w\" sizes=\"(max-width: 217px) 100vw, 217px\" \/>\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Professor Ursula Hess<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-cdb8caa e-con-full e-flex e-con e-child\" data-id=\"cdb8caa\" data-element_type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-3d7dd55 elementor-widget__width-initial 
elementor-widget elementor-widget-text-editor\" data-id=\"3d7dd55\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Title: A Bidirectional Lens on Context and Emotional Expressions<\/strong><\/p><p><strong>Abstract<\/strong>: We almost never encounter facial expressions in isolation; rather, they come embedded in rich, dynamic contexts. Recent research on human interaction has shifted from the traditional view of expressions as stand-alone signals to the claim that context is the primary driver of emotional meaning. From this perspective, facial expressions are inherently ambiguous cues whose interpretation hinges entirely on the surrounding situation.<\/p><p>But this one-way view misses a critical point: both context and expression provide information. The question is how this information is integrated. I propose a bidirectional perspective: just as context influences the interpretation of facial expressions, these expressions carry sufficient intrinsic meaning to influence, in turn, the interpretation of the situation that elicited them. The real question is therefore not whether context or expression drives emotion understanding, but when and how each source of information becomes more informative.<\/p><p><strong>Bio<\/strong>: Ursula Hess is Professor of Psychology at Humboldt University of Berlin. Her research focuses on human emotion communication. Her main interests are processes related to nonverbal synchronisation (mimicry and contagion) and the role of emotion expressions in impression formation. She has over 200 scholarly publications, including six edited books. 
She is a former president of the Society for Research on Emotion and the Society for Psychophysiological Research.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-356a157 e-flex e-con-boxed e-con e-parent\" data-id=\"356a157\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-fd6b2b0 elementor-widget__width-initial elementor-widget elementor-widget-image\" data-id=\"fd6b2b0\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/findanexpert.unimelb.edu.au\/profile\/148700-dom-dwyer\">\n\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/elementor\/thumbs\/Dom_Dwyer_Melbourne-r7gp69euibfi1yf8j08x6ornpo19bm67lcq8vh4jps.jpg\" title=\"Dom_Dwyer_Melbourne\" alt=\"Dom_Dwyer_Melbourne\" loading=\"lazy\" \/>\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">A\/Prof Dom Dwyer<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-0dc65d8 e-con-full e-flex e-con e-child\" data-id=\"0dc65d8\" data-element_type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-54c4094 elementor-widget__width-initial elementor-widget elementor-widget-text-editor\" data-id=\"54c4094\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Title: <span data-olk-copy-source=\"MessageBody\">Recognise, Interpret, Simulate\u2026 Now What? 
Translating AI to Make Clinical Impact<\/span><\/strong><\/p><p><strong>Abstract<\/strong>: <span data-olk-copy-source=\"MessageBody\">Advances in AI are rapidly transforming how we interact with emotional and behavioural data\u2014but their impact in frontline mental health care remains limited. This keynote explores how research in affective AI and related fields can translate into real-world value, using youth mental health services as a test case. Drawing on over 12 years of a mission to translate AI to the clinic, I describe the road towards implementation in three countries. I will also share our team\u2019s recent work building decision support systems that leverage natural language, speech, and clinical history to support shared decision-making in general practice and early intervention settings. The importance of infrastructure for bridging the translational chasm will be outlined in the context of a new $3M Medical Research Future Fund (MRFF) initiative to provide researchers with a National Critical Research Infrastructure to translate their AI models into medical devices. Within this scope, I\u2019ll discuss key challenges\u2014including bridging the gap between software development and production, user experience and design (UX), data governance, intellectual property, and regulatory uncertainty. To end the talk, I will discuss strategies for ensuring socially responsible deployment: from participatory design with young people to hybrid funding models that avoid exploitation. For the affective computing community, this talk offers both an invitation and a provocation: how do we move from detecting emotion to embedding emotional intelligence into the messy, high-stakes reality of care?<\/span><\/p><p><strong>Bio<\/strong>: <span data-olk-copy-source=\"MessageBody\">My vision is of a world where serious mental illness is preventable, care is proactive, and everyone has access to life-changing support. 
For over 10 years, I\u2019ve worked to transform mental healthcare by harnessing AI\u2014not as an end in itself, but as a way to make care more personal and create lasting change.<\/span>\u00a0I pioneered AI research in London and Munich for seven years before returning to Orygen to accelerate the mission within our globally leading ecosystem. I now lead the MRFF National Critical Research Infrastructure for AI in Mental Health, which is a $3M project aiming to provide consultancy services and software for researchers to responsibly translate AI algorithms into clinical care.\u00a0I also lead initiatives to create the next generation of AI algorithms as an NHMRC Principal Research Fellow (EL2) and Chief Investigator on over $30M of associated projects.\u00a0My vision is supported by a resilient organisational structure where I am pioneering for-purpose social enterprise strategies.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-a671395 e-flex e-con-boxed e-con e-parent\" data-id=\"a671395\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-a7885e5 elementor-widget__width-initial elementor-widget elementor-widget-image\" data-id=\"a7885e5\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/www.unsw.edu.au\/staff\/flora-salim\">\n\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/elementor\/thumbs\/Flora_Salim-scaled-r7ramvxqhud8n9ost1p1muzulaw95dqeaa6qo4vzyc.jpeg\" title=\"Flora_Salim\" alt=\"Flora_Salim\" loading=\"lazy\" \/>\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Prof Flora 
Salim<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-a7738b4 e-con-full e-flex e-con e-child\" data-id=\"a7738b4\" data-element_type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-daa0cfb elementor-widget__width-initial elementor-widget elementor-widget-text-editor\" data-id=\"daa0cfb\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Title: Modelling and Simulating Cyber-Physical-Social Behaviours with Multimodal Data<\/strong><\/p>\n<p><strong>Abstract<\/strong>: Understanding and anticipating complex dynamic behaviour is fundamental to both computational social science and the scientific modelling of socio-technical systems. Behaviours of humans and systems in the wild unfold dynamically, often shaped by diverse contexts and evolving intentions. Yet data capturing real-world behaviours are inherently noisy, context-dependent, and often only partially observed. This talk synthesises recent progress in understanding behaviour at scale through data-driven modelling and simulation, highlighting the convergence of data-efficient learning, generative models, and agentic AI for complex systems analysis. Recent advances reveal how latent routines, dynamics, and behavioural patterns can be learned without explicit ground-truth supervision. We will also demonstrate the use of LLMs for synthetic data generation. 
These approaches reflect a shift toward data-efficient, transferable, and context-sensitive models aimed at generalisation beyond limited user data and narrow domains. We also discuss the rise of agentic AI for enabling automated tooling and simulation. We will present our new cyber-physical-social simulation generation framework, enabling automated scenario generation, behaviour testing, and what-if analysis. This framework opens new possibilities for integrating empirical data with simulated environments.  <\/p>\n<p><strong>Bio<\/strong>: <span data-olk-copy-source=\"MessageBody\">Flora Salim is a full Professor in the School of Computer Science and Engineering at the University of New South Wales (UNSW) Sydney, where she also serves as the Deputy Director (Engagement) of the UNSW AI Institute. Her work focuses on multimodal machine learning and foundation models for time-series and spatio-temporal data, behavioural modelling with multimodal sensors and wearables, robust and trustworthy machine learning, and applications of AI and LLMs for smart and sustainable cities, and for mobility, transport, energy, and grid systems. She has received multiple nationally and internationally competitive fellowships, such as the Humboldt Fellowship, Bayer Fellowship, Victoria Fellowship, and ARC Australian Postdoctoral Industry (APDI)&nbsp;Fellowship, and many accolades and awards, such as the Women in AI Award Australia and New Zealand (2022) and the IBM Smarter Planet Industry Innovation Award.&nbsp;She is a member of the Australian Academy of Science\u2019s National Committee for Information and Computing Sciences and an elected member of the Australian Research Council (ARC) College of Experts. She is a Vice Chair of the IEEE Task Force on AI for Time-Series and Spatio-Temporal Data. 
She serves on the editorial boards of ACM TIST, ACM TSAS, PACM IMWUT, IEEE Pervasive Computing, and Nature Scientific Data, and has served as a senior reviewer or area chair for NeurIPS, ICLR, WWW, and many other top-tier conferences in AI and ubiquitous computing.&nbsp;Prof Salim is a Chief Investigator on the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (ADM+S), co-leading the Mobilities Focus Area. She is also a Key Chief Investigator in the ARC Training Centre for Whole Life Design for Carbon Neutral Infrastructure, leading the Program on Machine Learning for Carbon Performance. She has worked with many industry and government partners and managed large-scale research and innovation projects, leading to several patents and deployed systems locally and globally.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-a759c5e e-flex e-con-boxed e-con e-parent\" data-id=\"a759c5e\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-bf922da elementor-widget__width-initial elementor-widget elementor-widget-image\" data-id=\"bf922da\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/www.cs.cmu.edu\/~yaser\/\">\n\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/acii-conf.net\/2025\/wp-content\/uploads\/elementor\/thumbs\/Yaser-Photo-r8jm9zt3b90myrz54bcxffpq9h46la0vcr0xpqvbyc.jpg\" title=\"Yaser Photo\" alt=\"Yaser Photo\" loading=\"lazy\" \/>\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Prof Yaser Sheikh<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div 
class=\"elementor-element elementor-element-6b33b6e e-con-full e-flex e-con e-child\" data-id=\"6b33b6e\" data-element_type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-cb9ea23 elementor-widget__width-initial elementor-widget elementor-widget-text-editor\" data-id=\"cb9ea23\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Title: <span class=\"fontstyle0\">Photorealistic Telepresence<\/span> <br \/><\/strong><\/p><p><strong>Abstract<\/strong>: \u00a0<span class=\"fontstyle0\">Telepresence has the potential to bring billions of people into artificial reality (AR\/MR\/VR). It is the next step in the evolution of telecommunication, from telegraphy to telephony to videoconferencing. In this talk, I will describe early steps taken at Meta Reality Pittsburgh towards achieving photorealistic telepresence: realtime social interactions in AR\/VR with avatars that look like you, move like you, and sound like you. If successful, photorealistic telepresence will introduce pressure for the concurrent development of the next generation of algorithms and computing platforms for computer vision and computer graphics. In particular, I will introduce <\/span><span class=\"fontstyle0\">codec avatars<\/span><span class=\"fontstyle0\">: the use of neural networks to unify the computer vision (inference) and computer graphics (rendering) problems in signal transmission and reception. The creation of codec avatars require capture systems of unprecedented 3D sensing resolution, which I will also describe.<\/span> <\/p><p><strong>Bio<\/strong>: \u00a0<span class=\"fontstyle0\">Yaser Sheikh is the Vice President and founding director of the Meta Reality Lab in Pittsburgh, devoted to achieving photorealistic social interactions in augmented and virtual reality. 
He is a consulting professor at the Robotics Institute, Carnegie Mellon University, where he directed the Perceptual Computing Lab producing <\/span><span class=\"fontstyle0\">OpenPose <\/span><span class=\"fontstyle0\">and the <\/span><span class=\"fontstyle0\">Panoptic Studio<\/span><span class=\"fontstyle0\">. His research broadly focuses on machine perception and rendering of social behavior, spanning subdisciplines in computer vision, computer graphics, and machine learning. He has served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and has regularly served as a senior program committee member for SIGGRAPH, CVPR, and ICCV. His research has been featured by various news and media outlets including The New York Times, BBC, CBS, WIRED, and The Verge. With colleagues and students, he has won the Hillman Fellowship (2004), Honda Initiation Award (2010), Popular Science\u2019s &#8220;Best of What\u2019s New&#8221; Award (2014), as well as several conference best paper and demo awards (CVPR, ECCV, WACV, ICML).<\/span> <\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Professor Ursula Hess Title: A Bidirectional Lens on Context and Emotional Expressions Abstract: We almost never encounter facial expressions in isolation\u2014but rather embedded in rich, dynamic contexts. Recent research on human interaction has shifted from the traditional view of expressions as stand-alone signals to the claim that context is the primary driver of emotional meaning. 
From this perspective facial expressions [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"om_disable_all_campaigns":false,"footnotes":""},"class_list":["post-683","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/pages\/683","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/comments?post=683"}],"version-history":[{"count":123,"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/pages\/683\/revisions"}],"predecessor-version":[{"id":1566,"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/pages\/683\/revisions\/1566"}],"wp:attachment":[{"href":"https:\/\/acii-conf.net\/2025\/wp-json\/wp\/v2\/media?parent=683"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}