Quick Links
- Sunday, 10th September 2023
- Monday, 11th September 2023
- Tuesday, 12th September 2023
- Wednesday, 13th September 2023
Sunday, 10th September 2023
Registration
8:00: Registration opens
Register in the Media Lab lobby, then head up to the 6th floor for pastries and coffee!
Sunday Doctoral Consortium
13:00 – 14:00: Affective Systems for Supporting Quality of Life and Well-Being
Room: Multipurpose room (MR)
Chairs: Theodora Chaspari, Oya Celiktutan
Format: 10 mins for presentation, 20 mins for questions at the end of all presentations
1. Exploring New Art Therapies by Integrating Drama Therapy and XR Technology, Yoriko Matsuda (Nara Institute of Science and Technology, Japan)
2. The Affective Impact of Electronic Travel Aids on the Sense of Independence and Quality of Life for the Visually Impaired Using Public Transit in the U.S., Benjamin Corriette (Howard University, USA)
3. Affective Computing for Managing Crisis Communication, Egle Klekere (University of Latvia, Latvia)
4. VR and Sensors in Alleviating Loneliness, Saskia Davies (Swansea University, UK)
14:00 – 15:00: Perception and generation of multimodal social signals for real world applications
Room: E14-514B
Chairs: Theodora Chaspari, Oya Celiktutan
1. Towards Facial Expression Recognition in Immersive Virtual Reality with EmojiRain, Thorben Ortmann (Hamburg University of Applied Sciences, Germany)
2. Enhancing Speech Emotion Recognition for Real-World Applications via ASR Integration, Yuanchao Li (University of Edinburgh, UK)
3. Natural Language Generation for Socially Competent Task-Oriented Agents, Lorraine Vanel (LTCI, Telecom-Paris, Institut Polytechnique de Paris, France)
4. Using Computational Design to Enhance Emotion-Driven Recommendations in Multimedia Experiences, Albert Hosea Luganga (Technological University of the Shannon, Ireland)
15:00 – 16:00: Mentoring Session
Room: E14-514B
Format: Students will meet the DC mentors in a series of consecutive ~3min rounds.
16:00 – 17:00: Understanding Complex Affective Constructs
Room: E14-514B
Chairs: Theodora Chaspari, Oya Celiktutan
1. Understanding the Rigidity of Beliefs in Temporal Social Networks, Adiba Proma (University of Rochester, USA)
2. Affective Computing for detecting psychological Flow state: a definition and methodological problem, Elena Sajno (University of Pisa and HTLAB, Università Cattolica del Sacro Cuore, Italy)
3. Developing an Uncertainty-Aware Empathetic Conversational System, Samad Roohi (La Trobe University, Australia)
4. Expression and Perception of Stress Through the Lens of Multimodal Signals: A Case Study in Interpersonal Communication Settings, Ehsanul Haque Nirjhar (Texas A&M University, USA)
Sunday Tutorials
Sunday Tutorial 1
Introduction to eye and audio behaviour computing for affect analysis in wearable contexts
Tutors: Siyuan Chen, Ting Dang, Julien Epps
Room: E14-240
Time: 13:30 – 16:30 (Half day – Afternoon)
Tutorial Description: Multimodal processing for affect analysis is instrumental in enabling natural human-computer interaction, facilitating health and wellbeing, and enhancing overall quality of life. While different modalities, including facial expression, brain waves, speech, skin conductance and blood volume, offer valuable insights, modalities like eye behavior and audio provide exceptionally rich information and can be easily and non-invasively collected in mobile contexts without physical movement restrictions. Such rich information is highly correlated with cognitive and affective states, and is reflected not only in conventional eye and speech behavior such as gaze, pupil size, blink, linguistics and paralinguistics, but also in newly developed behavior descriptors such as eyelid movement, the interaction between the eyelid, iris and pupil, eye action units, heart and breathing sensing through in-ear microphones, abdominal sound sensing via custom belt-shaped wearables, and the sequence and coordination of multimodal behavior events. The high-dimensional nature of the available information makes eye and audio sensing ideal for multimodal affect analysis. However, fundamental and state-of-the-art eye and audio behavior computing has not been widely introduced to the community in tutorial form. Meanwhile, advancements in wearables and head-mounted devices such as the Apple Vision Pro, smart glasses and VR headsets make them the likely next generation of computing devices, providing novel opportunities to explore new types of eye behavior and new methods of body sound sensing for affect analysis and modelling. This tutorial will therefore focus on eye and audio modality computing, using an eye camera and a microphone as examples, and on multimodal wearable computing approaches, using the eye, speech and head movement modalities as examples, aiming to propel the development of future multimodal affective computing systems in diverse domains.
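For a concrete flavor of the eye-behaviour computing the tutorial covers, the sketch below estimates one of the simplest descriptors mentioned above, blink rate, from a pupil-diameter trace in which tracking dropouts mark eye closures. This is a minimal illustrative sketch only, not the tutors' material; the signal, sampling rate, and thresholds are all hypothetical.

```python
# Illustrative sketch: blink rate from a pupil-diameter trace, where
# tracking dropouts (NaNs or near-zero readings) mark eye closures.
# All values here are hypothetical, not from the tutorial materials.
import numpy as np

def blink_rate(pupil_diameter: np.ndarray, fs: float, min_gap_s: float = 0.1) -> float:
    """Return blinks per minute from a pupil-diameter signal sampled at fs Hz."""
    # Treat NaNs and near-zero readings as "eye closed / tracking lost".
    closed = np.isnan(pupil_diameter) | (pupil_diameter < 1e-3)
    # A blink onset is an open -> closed transition.
    onsets = np.flatnonzero(~closed[:-1] & closed[1:])
    # Merge onsets closer than min_gap_s: they belong to the same blink.
    if onsets.size:
        keep = np.insert(np.diff(onsets) > min_gap_s * fs, 0, True)
        onsets = onsets[keep]
    duration_min = len(pupil_diameter) / fs / 60.0
    return onsets.size / duration_min

# Hypothetical 60 s recording at 120 Hz with three simulated blinks.
fs = 120.0
signal = np.full(int(60 * fs), 3.5)    # ~3.5 mm pupil, eye open
for start in (500, 2000, 4700):        # simulated blink onsets (samples)
    signal[start:start + 15] = 0.0     # ~125 ms closures
print(f"{blink_rate(signal, fs):.1f} blinks/min")  # -> 3.0 blinks/min
```

A real pipeline would work from eye video or an eye tracker's output rather than a clean synthetic trace, but the same event-detection logic underlies many of the behavior descriptors listed above.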
About the presenters: Siyuan Chen is a Lecturer at the University of New South Wales (UNSW). Her work focuses on using “big data” from close-up eye videos, speech and head movement to understand human internal states such as emotion, cognition and action. She received her PhD in Electrical Engineering from UNSW. Before joining UNSW, she worked as a Research Intern at NII, Tokyo, Japan, a Research Fellow in the Department of Computer Science and Information Systems at the University of Melbourne, and a visiting researcher with the STARS team, INRIA, Sophia Antipolis, France. Dr. Siyuan Chen is a recipient of the NICTA Postgraduate Scholarship and top-up Project Scholarship, the Commercialization Training Scheme Scholarship, and the Australia Endeavor Fellowship 2015. She is a member of the Women in Signal Processing Committee.
Ting Dang is currently a Senior Research Scientist at Nokia Bell Labs and a visiting researcher in the Department of Computer Science and Technology, University of Cambridge. Prior to this, she worked as a Senior Research Associate at the University of Cambridge. She received her Ph.D. from the University of New South Wales, Australia. Her primary research interests are human-centric sensing and machine learning for mobile health monitoring and delivery, specifically exploring the potential of audio signals (e.g., speech, cough) collected via mobile and wearable sensing for automatic mental state prediction (e.g., emotion, depression) and disease detection and monitoring (e.g., COVID-19). She was shortlisted and invited to attend the Asian Dean’s Forum Rising Star 2022, and won the IEEE Early Career Writing Retreat Grant 2019 and the ISCA Travel Grant 2017.
Julien Epps received the BE and PhD degrees from the University of New South Wales, Sydney, Australia, in 1997 and 2001, respectively. From 2002 to 2004, he was a Senior Research Engineer with Motorola Labs, where he was engaged in speech recognition. From 2004 to 2006, he was a Senior Researcher and Project Leader with National ICT Australia, Sydney, where he worked on multimodal interface design. He then joined the UNSW School of Electrical Engineering and Telecommunications, Australia, in 2007 as a Senior Lecturer, and is currently a Professor and Head of School. He is also a Co-Director of the NSW Smart Sensing Network, a Contributed Researcher with Data61, CSIRO, and a Scientific Advisor for Sonde Health (Boston, MA). He has authored or co-authored more than 270 publications and serves as an Associate Editor for the IEEE Transactions on Affective Computing. His current research interests include characterisation, modelling, and classification of mental state from behavioral signals, such as speech, eye activity, and head movement.
Sunday Tutorial 2
The potential impact of the AI Act on affective computing research and development
Tutors: Andreas Häuselmann, Deniz Iren, Bhoomika Agarwal
Room: E14-244
Time: 13:30 – 16:30 (Half day – Afternoon)
Tutorial Description: The European Union is currently negotiating the AI Act, a legislative initiative aimed at establishing a comprehensive and standardized framework for governing artificial intelligence. Proposed by the European Commission in April 2021, it was most recently amended in June 2023 by the European Parliament (‘AI Act proposal’). The proposal outlines a risk-based approach that classifies AI practices into three categories: unacceptable-risk, high-risk, and low-risk. Activities falling under the unacceptable-risk category are strictly prohibited. Examples include AI systems inferring the emotions of natural persons in the context of law enforcement, border management, the workplace, and education; AI systems deploying subliminal, manipulative, or deceptive techniques; the exploitation of vulnerabilities among specific groups; AI-driven social scoring systems; and remote biometric identification for law enforcement purposes.
The high-risk category encompasses systems and practices that have the potential to harm individuals’ health or safety, or to impact their fundamental rights. Such systems are allowed, but are subject to strict compliance requirements. The AI Act proposal may have significant implications for the field of affective computing research and practice. Firstly, it establishes a definition of emotion recognition systems, namely “an AI system that aims to identify or infer emotions, thoughts, states of mind, or intentions of individuals or groups based on their biometric and biometric-based data”.
Secondly, the AI Act proposal emphasizes concerns and risks related to emotion recognition systems. It acknowledges that emotion expressions and perceptions vary across cultures and contexts. Furthermore, the AI Act proposal mentions the following ‘shortcomings’: limited reliability, lack of specificity, and limited generalisability. According to the AI Act proposal, these shortcomings could lead to major risks of abuse.
Lastly, the AI Act proposal contains transparency obligations: providers and deployers of emotion recognition systems must comply with specific transparency obligations. High-risk systems are also subject to a fundamental rights impact assessment.
Part 1: Presentation about the AI Act proposal, discussing the provisions most relevant to the AC community and highlighting potential impacts on affective computing research and practice (60-90 mins).
Part 2: Interactive session (120-150 mins).
Participants will split into breakout groups to share their perceptions, concerns, and proposed mitigation strategies. Participants will match the risks identified by the AI Act proposal with the risks reported by the ACII community; for the latter, we will provide participants with a thematic analysis of the ethical impact statements of 70 papers accepted for presentation at the ACII conference. During the group discussions, Andreas will be available to answer questions and provide clarifications regarding the legal text, and Deniz will facilitate the discussions. Finally, groups will briefly present their findings and highlight potential mismatches between the risks identified by the AI Act proposal and the risks reported by the ACII community (based on the ethical impact statements).
About the presenters: Deniz Iren is an assistant professor at the Department of Information Science at the Open University of the Netherlands. His research focuses on facial expression recognition, speech emotion recognition, ethics of affective computing, and addressing the alignment problem with affective computing.
Andreas Häuselmann is an external PhD candidate at eLaw, Center for Law and Digital Technologies at Leiden University. Next to his external PhD research, he is a Senior Legal Adviser for Privacy and Cybersecurity at the Dutch law firm De Brauw Blackstone Westbroek N.V. His research focuses on the legal aspects of AI, particularly EU privacy and data protection law.
Bhoomika Agarwal is a PhD candidate at the Educational Sciences Faculty of the Open University of the Netherlands. Her research focuses on creating an ethical framework for AI in education.
Sunday Tutorial 3
Measurement Validation in Affective Computing
Tutor: Jeffrey Girard
Room: E15-359
Time: 9:00 – 12:00 (Half day – Morning)
Tutorial Description: Does your label really measure what you think it does? How can you provide evidence of this to reviewers and readers of your research? In affective computing, researchers are often interested in studying and building models to predict psychological quantities that are difficult to measure. For example, there is no ruler or thermometer for measuring amusement, depression, or extraversion. These quantities must be measured indirectly, e.g., using self-report questionnaires, structured interviews, or observer rating scales. The process of evaluating the extent to which such measurements are consistent and trustworthy (e.g., measure what they purport to) is called “measurement validation.” This three-hour tutorial will teach attendees about the theory and practice of this critically important part of the research process.
Theoretical topics will include overviews of classical test theory, generalizability theory, and contemporary validity theory. Practical topics will include the estimation of external and criterion validity coefficients, inter-item reliability for self-report questionnaires (using Cronbach’s alpha and McDonald’s omega), and inter-rater reliability for structured interviews and observer rating scales (using generalized kappa coefficients and modern intraclass correlation coefficients).
We will discuss best practices for designing, conducting, and reporting a measurement validation study, drawing examples from the affective computing community. We will also discuss common challenges that come up in this process (e.g., imbalanced classes, low variance, ordered categories, and missing data) and how to address these challenges using recent advances in statistical methods (e.g., generalized coefficients, multilevel decomposition, and simulation-based methods).
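As a concrete illustration of one of the inter-item reliability estimates mentioned above, the sketch below computes Cronbach's alpha for a small questionnaire. This is a minimal illustrative sketch, not the tutorial's material; the response matrix is invented.

```python
# Illustrative sketch: Cronbach's alpha, an inter-item reliability
# estimate for a self-report questionnaire.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of questionnaire responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 people to a 4-item scale (1-5 Likert).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbachs_alpha(responses):.2f}")  # ~0.96 for these consistent items
```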
About the presenter: Jeffrey Girard received his PhD in Clinical Psychology from the University of Pittsburgh and completed a postdoc at the Language Technologies Institute at Carnegie Mellon University. He is currently an Assistant Professor in the psychology department at the University of Kansas, where he directs the Brain, Behavior, and Quantitative Science program and co-directs the Kansas Data Science Consortium. He serves as an Associate Editor at the IEEE Transactions on Affective Computing journal and helped organize this year’s ACII conference as a member of the Program and LBR Committees.
Jeffrey’s research focuses on the recognition and interpretation of facial action units, interpersonal communication and functioning, and the structure of psychopathology. His methodological research focuses on measurement validation, Bayesian statistical modeling, and technology-assisted psychological assessment.
Sunday Workshops
Sunday Workshop 1
Affective Computing for Mental Wellbeing: Challenges, Opportunities, and Promising Synergies (mWELL)
Organizers: Iulia Lefter, David Luxton, Alice Baird, Theodora Chaspari, Zakia Hammal, Marwa Mahmoud, Albert Ali Salah
Room: Dreyfoos Lecture Hall
Time: 09:00 – 17:00 (Full day)
09:00 – 09:10: Welcome note
09:10 – 09:55: Keynote Akane Sano
09:55 – 10:15: Coffee Break
10:15 – 11:00: Keynote Nicholas Cummins
11:00 – 12:00: Short Talk 1
Format: 15 minutes per talk.
- Context-aware EEG-based perceived stress recognition based on emotion transition paradigm. Jiyao Liu, Lang He, Zhiwei Chen, Ziyi Chen, Yu Hao, and Dongmei Jiang.
- BERSting at the screams: Recognition of shouting and distress from mobile phone recordings. Paige Tuttosi and Angelica Lim.
- Investigating self-supervised learning for predicting stress and stressors from passive sensing. Harish Haresamudram, Jina Suh, Javier Hernandez, Jenna Butler, Ahad Chaudhry, Longqi Yang, Koustuv Saha, and Mary Czerwinski.
12:00 – 13:00: Lunch
13:00 – 13:45: Keynote John Torous
13:45 – 15:00: Short Talk 2
Format: 15 minutes per talk.
- You go first: The effects of self-disclosure reciprocity in human-chatbot interactions. Emmelyn Croes, Marjolijn Antheunis, and Linwei He.
- Mindfulness based stress reduction: A randomised trial of a virtual human, teletherapy, and a chatbot. Mariam Karhiy, Mark Sagar, Mike Antoni, Kate Loveys, and Elizabeth Broadbent.
- Investigating psychological and physiological effects of forest walking: A machine learning approach. Bhargavi Mahesh, Andreas Seiderer, Michael Dietz, Elisabeth Andre, Joachim Rathmann, Jonathan Simon, Christoph Beck, and Yekta Said Can.
- Social performance rating during social skills training in adults with autism spectrum disorder and schizophrenia. Kana Miyamoto, Hiroki Tanaka, Jennifer Hamet Bagnou, Elise Prigent, Celine Clavel, Jean-Claude Martin, and Satoshi Nakamura.
- Towards successful deployment of wellbeing sensing technologies: Identifying misalignments across contextual boundaries. Jina Suh, Javier Hernandez Rivera, Koustuv Saha, Kathy Dixon, Mehrab Bin Morshed, Esther Howe, Anna Kawakami, and Mary Czerwinski.
15:00 – 15:30: Coffee Break
15:30 – 16:15: Keynote David Luxton
16:15 – 17:00: Group discussion
Sunday Workshop 2
Second Workshop on Affective Human-Robot Interaction (AHRI 2023)
Organizers: Leimin Tian, Chuang Yu, Siyang Song, Zhao Han, Jingting Li, Meiying Qin, Xiaofeng Liu, Aiguo Song, Adriana Tapus, Angelo Cangelosi
Room: Bartos Auditorium
Time: 09:00 – 17:00 (Full day)
9:00 – 9:15: Opening remarks
9:15 – 10:00: Keynote 1 (Dr. Su-Jing Wang, remote)
Format: 30min presentation + 15min Q&A
10:00 – 10:30: Coffee break
10:30 – 11:15: Keynote 2 (Dr. Shaun Canavan)
Format: 30min presentation + 15min Q&A
11:15 – 12:00: Keynote 3 (Dr. Linlin Shen)
Format: 30min presentation + 15min Q&A
12:00 – 13:30: Lunch break
13:30 – 14:15: Oral session 1
Format: 7min presentation + 3min Q&A per paper
- Preference learning from emotional expressions contributes integrative solutions between human-AI negotiation. Motoaki Sato, Kazunori Terada and Jonathan Gratch.
- Harmony Index: A Scalar Index to Assess and Predict Effectiveness in Multi-Agent Teaming. Darryl Roman, Noah Ari and Johnathan Mell.
- The Role of Simulated Emotions in Reinforcement Learning: Insights from a Human-Robot Interaction Experiment. Floortje Lycklama à Nijeholt and Joost Broekens.
- A Framework for Automatic Personality Recognition in Dyadic Interactions. Euodia Dodd, Siyang Song and Hatice Gunes.
14:15 – 15:00: Keynote 4 (Dr. Nadia Berthouze)
Format: 30min presentation + 15min Q&A
15:00 – 15:30: Coffee break
15:30 – 16:00: Oral session 2
Format: 7min presentation + 3min Q&A per paper
- Learning to Prompt for Vision-Language Emotion Recognition. Hongxia Xie, Hua Chung, Hong-Han Shuai and Wen-Huang Cheng.
- T2GR2: Textile Touch Gesture Recognition with Graph Representation of EMG. Chuang Yu, Yifu Liu, Bruna Petreca, Sharon Baurley and Nadia Berthouze.
- Comparing an android head with its digital twin regarding the dynamic expression of emotions. Amelie Kassner and Christian Becker-Asano.
16:00 – 16:45: Keynote 5 (Dr. Huili Chen)
Format: 30min presentation + 15min Q&A
16:45 – 17:00: Closing remarks
Sunday Workshop 3
Workshop on Addressing Social Context in Affective Computing (ASOCA)
Organizers: Bernd Dudzik, Tiffany Matej Hrkalovic, Joost Broekens, Dirk Heylen, Zakia Hammal
Room: E14-341
Time: 09:00 – 16:50 (Full day)
09:00-09:10: Welcome Note
09:10-09:55: Keynote 1: Daniel Balliet. Perceptions of (social) situations explain variation in behavior
Abstract — Humans must understand the context of their decisions and behavior to behave in ways that are best for themselves and others. Recently, there has been an abundance of models of how humans understand the situations they experience in daily life and of why this understanding matters for behavior. I will first briefly review a few of these models, and then centre my attention on a specific theory of how people understand social situations: Functional Interdependence Theory (FIT). FIT proposes that people experience a great variety of interdependent situations with others, and that there could be benefits to detecting the type of interdependence experienced in a situation and using it to condition behavioral strategies. Interdependence can vary along four dimensions, and I will discuss evidence that people can perceive situations (and relationships) along these dimensions. Across several studies, we have found that people can reliably differentiate situations along five dimensions: mutual dependence, corresponding-versus-conflicting outcomes (i.e., conflict), asymmetry of dependence (i.e., power), future interdependence, and information certainty. Moreover, people can use these dimensions to describe different relationships in their social network (e.g., acquaintances, friends, romantic partners, and family). Across several studies, we have observed how people perceive their social interactions and how these perceptions predict when they decide to cooperate. An ability to perceive differences in interdependent situations and relationships could enable people to make better partner choices, condition behavioral strategies, and adapt to a broad range of ecological conditions that vary according to interdependence.
10:00-10:30: Coffee Break
10:30-10:50: Paper Blitz Talk Session 1
Format: 5min presentation per paper
- Affective learning and the charismatic lecturer. Vered Aharonson, Aviad Malachi and Tal Katz-Navon.
- Social Event Context and Affect Prediction in Group Videos. Aarti Malhotra, Garima Sharma, Rishikesh Kumar, Abhinav Dhall and Jesse Hoey.
- Are facial expression technologies tools for social interaction analysis rather than for emotion recognition? Bronagh Allison and Gary Mckeown.
- ColEmo: A Flexible Open Source Software Interface for Collecting Emotion Data. Mohammad Hasan Rahmani, Rafael Berkvens and Maarten Weyn.
10:50-11:20: Poster Session 1
11:20-12:00: Brainstorming in Groups 1
12:00-13:30: Lunch Break
13:30-14:15: Keynote 2: Giovanna Paola Varni. Which roles does affect play in social contexts?
Abstract — Understanding the role played by affect in social contexts, although a major goal of affective computing, has received less attention than modelling and synthesizing individual affect. In my talk I will present my studies on affect in dyadic and team social contexts, showing how affect can be either a shaper or a by-product of interaction.
14:20-14:40: Paper Blitz Talk Session 2
Format: 5 min presentation per paper
- Comparative Analysis of Vocal and Textual Emotion Detection and their association with Consumer Preferences: An Empirical Study. Qiurui Chen, Laduona Dai and Nino Hardt.
- End-to-end Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations. Yajing Feng and Laurence Devillers.
- Vocalization for Emotional Communication in Crossmodal Affective Display. Pranavi Jalapati, Selwa Sweidan, Xin Zhu and Heather Culbertson.
14:45-15:15: Poster Session 2
15:15-15:30: Coffee Break
15:30-16:10: Brainstorming in Groups 2
16:10-16:40: Plenary Discussion
16:40-16:50: Closing
Sunday Workshop 4
Workshop on Social and Affective Intelligence (SAI)
Organizers: Leena Mathur, Dong Won Lee, Micol Spitale, Cynthia Breazeal, Louis-Philippe Morency
Room: E15-359
Time: 13:30 – 17:00 (Half-day)
13:30 – 13:40: Introduction
13:40 – 14:10: Keynote 1: Ralph Adolphs. Engineering Emotion for Affective Intelligence
Abstract — Affective Computing has made significant progress in classifying emotion labels from videos, voice, text and multimodal data. This success introduces a number of challenges. First, all supervised approaches hinge on the validity of the labels in the first place; a considerable body of recent work in psychology demonstrates that these are generally invalid, culturally biased, and ignore context. Second, these issues can extend to the production of emotion signals in machines and robots (which can provide useful mimicry, but not genuine emotion). The answer to both challenges is, in a sense, to engineer emotions in reverse. Rather than cobbling together relatively shallow applications, we should build in emotions from the start. This would facilitate future extensions that are currently problematic, such as incorporating context, and would ensure that future systems are more transparent about their affective capabilities during human-machine interactions. This talk will review current challenges and provide an overview of what is known from psychology and neuroscience.
14:10 – 14:40: Keynote 2: Mohammad Soleymani. Generalization and Personalization in Emotion Recognition
Abstract — Recent developments in machine learning and signal processing have led to unprecedented opportunities for advancing emotion recognition systems. This talk will first provide a high-level overview of the current state of emotion recognition systems, as well as core challenges towards generalizing and personalizing these systems. This talk will also discuss future directions in emotion recognition research and an opportunity for the audience to engage in open discussion about these ideas.
14:40 – 15:05: Spotlight Papers #1
Format: 5 min presentation + 3min Q&A per paper
- Empowering Dialogue Systems with Affective and Adaptive Interaction: Integrating Social Intelligence. Yuanchao Li and Catherine Lai.
- Investigating Large Language Models’ Perception of Emotion Using Appraisal Theory. Nutchanon Yongsatianchot, Parisa Torshizi and Stacy Marsella.
- Assessing the Impact of Personality on Affective States from Video Game Communication. Atieh Kashani, Johannes Pfau and Magy Seif El-Nasr.
15:05 – 15:30: Coffee Break
15:30 – 15:55: Spotlight Papers #2
Format: 5 min presentation + 3min Q&A per paper
- How (not) to Evaluate Computational Empathy: Testing the Assumptions of the Evaluation Methods in a Use Case. Özge Nilay Yalçın.
- Informative Speech Features based on Emotion Classes and Gender in Explainable Speech Emotion Recognition. Ediz Yildirim and Deniz Iren.
- Mutual Cross-Attention in Dyadic Fusion Networks for Audio-Video Emotion Recognition. Jiachen Luo, Huy Phan, Lin Wang and Joshua Reiss.
15:55 – 16:25: Keynote 3: Hae Won Park. Emotional Intelligence for Long-Term, Personalized Social Robot Interactions
Abstract — This talk will provide an overview of current research in social robots and machines that are capable of personalization during long-term interactions with humans. A key challenge in this area is to create machines that can attend to the unique needs and goals of individuals during long-term interactions. This talk will discuss insights from the development of systems deployed in real-world settings to support human well-being in social contexts spanning early childhood education, healthcare, eldercare, and emotional wellness. This talk will discuss core challenges and future directions in advancing emotionally-intelligent machines that are capable of personalized long-term social interactions.
16:25 – 16:55: Group Brainstorming
16:55 – 17:00: Paper Awards
Sunday Workshop 5
EPiC 2023: The Emotion Physiology and Experience Collaboration
Organizers: Stanisław Saganowski, Bartosz Perz, Maciej Behnke, Nicholas A. Coles
Room: E14-240
Time: 9:00 – 12:00 (Half-day)
09:00 – 09:10: Opening remarks: Maciej Behnke
09:10 – 10:10: Keynote talk: Lisa Feldman Barrett. Context Reconsidered: Population thinking, relational realism and the study of emotion
10:10 – 10:30: Coffee Break
10:30 – 12:00: EPiC Challenge Session:
- Introduction to the EPiC Challenge, Bartosz Perz. 10 minutes
- Ensemble Learning to Assess Dynamics of Affective Experience Ratings and Physiological Change, Felix Dollack, Kiyoshi Kiyokawa, Huakun Liu, Monica Perusquía-Hernández, Chirag Raman, Hideaki Uchiyama and Xin Wei. 15 minutes
- Deep Learning Analysis of Electrophysiological Series for Continuous Emotion Detection, Javier Orlando Pinzon-Arenas, Luis Roberto Mercado-Diaz, Julian Tejada, Fernando Marmolejo-Ramos, Carlos Barrera-Causil, Jorge Ivan Padilla, Raydonal Ospina and Hugo Posada-Quintero. 15 minutes
- Affective Computing as a Tool for Understanding Emotion Dynamics from Physiology: A Predictive Modeling Study of Arousal and Valence, Tomás Ariel D’Amelio, Nicolás Marcelo Bruno, Leandro A. Bugnon, Federico Zamberlan, and Enzo Tagliazucchi. 15 minutes
- The EPiC Challenge results, Stanisław Saganowski. 10 minutes
- EPiC Challenge Panel Q&A, moderated by Nicholas A. Coles. 15 minutes
- An EPiC Future, Nicholas A. Coles. 10 minutes
Sunday Workshop 6
Moral Imagination in Affective Computing
Organizers: Amanda McCroskery, Daniel McDuff, Brendan Jou, Alice Moloney, Geoff Keeling, Ben Zevenbergen, Shri Narayanan, Hatice Gunes, Jesse Hoey, Steven Kelts, Luke Stark
Room: Silverman Skyline
Time: 09:00 – 17:00 (Full day)
09:00 – 09:30: Welcome
09:30 – 10:15: Reflection: Moral Intuitions
10:15 – 10:30: Coffee Break
10:30 – 12:00: Reflection: Values Spotting, Interpretation & Tensions
12:00 – 13:15: Lunch
13:15 – 15:00: Expansion: Moral Compass & Possible World Scenario
15:00 – 15:15: Coffee Break
15:15 – 17:00: Action: Emerging Responsibility Themes, Imaginative Steering, & Closing
Sunday Workshop 7
What’s Next in Affect Modeling?
Organizers: Matthew Barthet, Konstantinos Makantasis, Georgios N. Yannakakis, Bjoern Schuller, Guoying Zhao
Room: E14-244
Time: 09:00 – 12:50 (Half day)
09:00 – 09:10: Introduction
09:10 – 10:10: Oral Session 1
Format: 20 minutes oral presentation for each paper (15 minutes presentation + 5 minutes questions).
- Active Learning for a Classroom Observer who Can’t Time Travel. Andres Felipe Zambrano, Ryan S. Baker and Andrew Lan.
- Transformer-based Self-supervised Representation Learning for Emotion Recognition Using Bio-signal Feature Fusion. Shrutika Sawant, Erick F. X., Jaspar Pahl, Pulkit Arora, Andreas Foltyn, Nina Holzer and Theresa Götz.
- Knowing Your Annotator: Rapidly Testing the Reliability of Affect Annotation. Matthew Barthet, Chintan Trivedi, Kosmas Pinitas, Emmanouil Xylakis, Konstantinos Makantasis, Antonios Liapis and Georgios N. Yannakakis.
10:10 – 10:30: Coffee Break
10:30 – 11:50: Oral Session 2
Format: 20 minutes oral presentation for each paper (15 minutes presentation + 5 minutes questions).
- A Privacy-Preserving Multi-Task Learning Framework For Emotion and Identity Recognition from Multimodal Physiological Signal. Mohamed Benouis, Yekta Said Can and Elisabeth André.
- An Automated Data Cleaning Framework for Improving Facial Expression Classification. Anis Elebiary, Saandeep Aathreya Sidhapur Lakshminarayan and Shaun Canavan.
- What’s Next in Affective Modeling? Large Language Models. Nutchanon Yongsatianchot, Tobias Thejll-Madsen and Stacy Marsella.
- Modeling Player Personality Factors from In-Game Behavior and Affective Expression. Reza Habibi, Johannes Pfau and Magy Seif El-Nasr.
11:50 – 12:50: Keynote (TBA)
Sunday Opening Reception at the MIT Museum
18:00 – 21:00
Opening Reception at the MIT Museum. Come join us at the MIT Museum for a night of science, networking, and snacks!
Monday, 11th September 2023
9:00 – 9:15: Opening
Room: Multipurpose room (MR)
9:15 – 10:15: Keynote: Louis-Philippe Morency. What is multimodal?
Room: MR
Chair: Agata Lapedriza
Abstract — Our experience of the world is multimodal – we see objects, hear sounds, feel texture, smell odors, and taste flavors. In recent years, a broad and impactful body of research emerged in artificial intelligence under the umbrella of multimodal, characterized by multiple modalities. As we formalize a long-term research vision for multimodal research, it is important to reflect on its foundational principles and core technical challenges. What is multimodal? Answering this question is complicated by the multi-disciplinary nature of the problem, spread across many domains and research fields. Two key principles have driven many multimodal innovations: heterogeneity and interconnections from multiple modalities. Historical and recent progress will be synthesized in a research-oriented taxonomy, centered around 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification. The talk will conclude with open questions and unsolved challenges essential for a long-term research vision in multimodal research.
10:15 – 10:30: Coffee Break (Winter Garden Room)
10:30 – 12:00: Annotation and Social Context of Emotion
Room: MR
Chair: Monica Perusquía-Hernández
Format: 14 mins for presentation, 4 mins for questions and changeover
- Belief Mismatch Coefficient (BMC): A Novel Interpretable Measure of Prediction Accuracy for Ambiguous Emotion States, Jingyao Wu, Ting Dang, Vidhyasaharan Sethu and Eliathamby Ambikairajah (University of New South Wales, Australia, Nokia Bell Labs, UK)
- Analyzing the Effect of Affective Priming on Emotional Annotations, Luz Martinez-Lucas, Ali Salman, Seong-Gyun Leem, Shreya Upadhyay, Chi-Chun Lee and Carlos Busso (University of Texas at Dallas, USA)
- How Expression and Context Determine Second-person Judgments of Emotion, Jessie Hoegen, Gale Lucas, Danielle Shore, Brian Parkinson and Jonathan Gratch (University of Southern California, USA, University of Oxford, UK)
- Sources of Facial Expression Synchrony, Su Lei and Jonathan Gratch (University of Southern California, USA)
- Recognizing Conversational State from the Eye Using Wearable Eyewear, Siyuan Chen and Julien Epps (University of New South Wales, Australia)
10:30 – 12:00: Machine Learning & Generative AI
Room: Silverman Skyline Room (SSR)
Chair: Oggi Rudovic
- Active Learning with Contrastive Pre-training for Facial Expression Recognition, Shuvendu Roy and Ali Etemad (Queen’s University, Canada)
- PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models, Tiantian Feng and Shrikanth Narayanan (University of Southern California, USA)
- On the use of Vision-Language models for Visual Sentiment Analysis: a study on CLIP, Cristina Bustos, Carles Civit, Brian Du, Albert Sole-Ribalta and Agata Lapedriza (Universitat Oberta de Catalunya, Spain)
- Emotion-Controllable Impression Utterance Generation for Visual Art, Ryo Ueda, Hiromi Narimatsu, Yusuke Miyao and Shiro Kumano (The University of Tokyo; NTT Communication Science Laboratories, Japan)
- The Affective Nature of Generative News Images: Impact on Visual Journalism, Sejin Paik, Sarah Bonna, Ekaterina Novozhilova, Ge Gao, Jongin Kim, Derry Wijaya and Margrit Betke (Boston University, USA)
12:00 – 13:15: Lunch Break (The Street)
13:15 – 14:45: Large Language Models
Room: MR
Chair: Krishna Somandepalli
- Contextual Emotion Estimation from Image Captions, Vera Yang, Archita Srivastava, Yasaman Etesam, Chuxuan Zhang and Angelica Lim (Simon Fraser University, Canada)
- Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models, Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch and Shrikanth Narayanan (University of Southern California, USA)
- Exploring ChatGPT’s Empathic Abilities, Kristina Schaaff, Caroline Reinig and Tim Schlippe (IU International University of Applied Sciences, Germany)
- Fine-grained Affective Processing Capabilities Emerging from Large Language Models, Joost Broekens, Bernhard Hilpert, Suzan Verberne, Kim Baraka, Patrick Gebhard and Aske Plaat (Leiden University; Vrije Universiteit Amsterdam, The Netherlands, DFKI, Germany)
- Is GPT a Computational Model of Emotion?, Ala Nekouvaght Tak and Jonathan Gratch (University of Southern California, USA)
13:15 – 14:45: Affective and Interactive fMRI (Special Session)
Room: Silverman Skyline Room (SSR)
Chair: Ray Lee
- Reciprocal Dyadic Affective Interaction: from Facial Expressions to Brain Networks, Ray Lee, Joshua Friedman, Paul Sajda, and Nim Tottenham (Columbia University, USA)
- Neuroplastic Effects of Meditation-Based Neurofeedback on the Pathological Brain, Clemens Bauer, Jiahe Zhang, Anastasia Yendiki, Randy Auerbach, Margaret Niznikiewicz, and Susan Whitfield-Gabrieli (Northeastern University, Columbia University, Harvard University, USA)
- Affective Brain Development in Concert with Caregivers, Nim Tottenham (Columbia University)
- Capturing Interactions between Arousal and Cortical Dynamics with Simultaneous Pupillometry and EEG-fMRI, Linbi Hong, Hengda He, and Paul Sajda (Columbia University, USA)
14:45 – 15:00: Coffee Break (Winter Garden Room)
15:00 – 16:30: Emotion Recognition
Room: Silverman Skyline Room (SSR)
Chair: Laurence Devillers
- Modeling Messaging Metadata to Identify Digital Disagreements among Non-incarcerated Adolescents in the Juvenile Justice System, Harshit Pandey, Christie Rizzo, Charlene Collibee and Aarti Sathyanarayana (Northeastern University; Brown University, USA)
- Automated Emotional Valence Estimation in Infants with Stochastic and Strided Temporal Sampling, Mang Ning, Itir Onal Ertugrul, Daniel Messinger, Jeffrey Cohn and Albert Ali Salah (Utrecht University, The Netherlands, University of Miami; University of Pittsburgh, USA)
- Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition, Juan Vazquez-Rodriguez, Grégoire Lefebvre, Julien Cumin and James Crowley (Orange Innovation; Université Grenoble Alpes, France)
15:00 – 16:30: Late Breaking Results Flash Talks
Room: MR
Chair: Jeffrey Girard
Format: 4 mins for presentation
- Ethical Risks, Concerns, and Practices of Affective Computing: A Thematic Analysis, Deniz Iren, Ediz Yildirim and Krist Shingjergji (Open Universiteit, The Netherlands)
- Speech Emotion Classification from Affective Dimensions: Limitation and Advantage, Meysam Shamsi (Le Mans University, France)
- Early vs. Late Multimodal Fusion for Recognizing Confusion in Collaborative Tasks, Anisha Ashwath, Michael Peechatt, Cecilia Alm and Reynold Bailey (Rochester Institute of Technology, USA)
- Grounding the Evaluation of Affect Recognition Models Beyond Dataset-Based Ground Truths, Carles Civit and Agata Lapedriza (Universitat Oberta de Catalunya, Spain)
- Predicting Stress and Providing Counterfactual Explanations: A Pilot Study on Caregivers, Kei Shibuya, Zachary King, Maryam Khalid, Han Yu, Yufei Shen, Khadija Zanna, Ryan Brown, Marzieh Majd, Christopher Fagundes and Akane Sano (Rice University; The University of Texas Department of Electrical and Computer Engineering; University of California San Francisco; Harvard Medical School, USA)
- Computing Beneficence: a Study of Pro-Social Attitudes in Comments of Online Social Media Users, Maurizio Mancini, Matteo Cinelli, Radoslaw Niewiadomski, Gabriele Etta, Paul Ciurla, Walter Quattrociocchi and Valentina Franzoni (Sapienza University of Rome; University of Genoa; University of Perugia, Italy, Hong Kong Baptist University, Hong Kong)
- Relatable and Humorous Videos Reduce Hyperarousal in Math Exams, Fettah Kiran, Amanveer Wesley, Tammy Tolar, Paul Cirino, Panagiotis Tsiamyrtzis and Ioannis Pavlidis (University of Houston, USA, Politecnico di Milano, Italy)
- Analyzing the contribution of different passively collected data to predict Stress and Depression, Irene Bonafonte, Cristina Bustos, Abraham Larrazolo, Gilberto Lorenzo Martínez Luna, Adolfo Guzmán Arenas, Xavier Baró, Isaac Tourgeman, Mercedes Balcells and Agata Lapedriza (Helmholtz Munich, Germany, Universitat Oberta de Catalunya, Spain, Instituto Politecnico Nacional, Mexico, Albizu University; Massachusetts Institute of Technology, USA)
- Investigating Social Interaction Patterns with Depression Severity across Different Personality Traits Using Digital Phenotyping, Ohida Binte Amin, Varun Mishra and Aarti Sathyanarayana (Northeastern University, USA)
- Temporal Arcs of Mental Health: Patterns Behind Changes in Depression over Time, Laura Biester, James Pennebaker and Rada Mihalcea (Middlebury College; University of Texas at Austin; University of Michigan, USA)
- How well can PPG be an alternative to ECG for acute stress detection in nonresting state?—Comprehensive evaluation from heart rate variability to model interpretability using a deep neural network, Yasuhide Hyodo, Kiyoshi Yoshikawa, Takanori Ishikawa and Yota Komoriya (Sony Corporation, Japan)
- Affectively There?: Exploring the Physiological Correlates of Perspective Taking in Virtual Reality, Caglar Yildirim and D. Fox Harrell (Massachusetts Institute of Technology, USA)
- Whiff: Olfactory Interfaces for Motivation and Productivity, Amelia Gan, Ana Merla and Naana Obeng-Marnu (Harvard University Graduate School of Design; Massachusetts Institute of Technology, USA)
- Brain-Facial Expression Interface for Emotional Communication, Shinya Shimizu, Airi Ota, Ai Nakane and Takao Nakamura (Nippon Telegraph and Telephone Corporation, Japan)
- Human-AI Collaboration for the Detection of Deceptive Speech, Abdullah Aman Tutul, Theodora Chaspari, Sarah Ita Levitan and Julia Hirschberg (Texas A&M University; Hunter College; Columbia University, USA)
- Fostering Parent-Child Interactions through Behavioral Understanding of Synchrony, Jocelyn Shen, Ying Li, Javaria Hassan, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park and Rosalind Picard (Massachusetts Institute of Technology, USA)
- Designing Conversational Agents for Emotional Self-Awareness, Jocelyn Shen, Kimaya Lecamwasam, Hae Won Park, Cynthia Breazeal and Rosalind Picard (Massachusetts Institute of Technology, USA)
- Anthropomorphic eHMI Design of Autonomous Vehicles in Roblox to Change Road Users’ Behavior, Dokshin Lim, Yongjun Kim, Hagyeong Gwon and Yeonghwan Shin (Hongik University, South Korea)
- Sound and Visual Entrainment in Affective, Bio-Responsive Multi-User VR Interactives, Meehae Song, Steve DiPaola and Servet Ulas (Simon Fraser University, Canada)
- Design of an Emotion-Aware Painting Application With an Interactional Approach for Virtual Reality, Jungah Son, Marko Peljhan, George Legrady and Misha Sra (UC Santa Barbara, USA)
16:30 – 18:00: Panel Session: Ethical Affective Computing in Practice
Room: MR
Chair: Desmond Ong
Panelists:
Amanda McCroskery (Google Research)
Andreas Häuselmann (eLaw, Center for Law and Digital Technologies, Leiden University)
Javier Hernandez (Human Understanding and Empathy (HUE) Group, Microsoft Research)
Sherry Turkle (Massachusetts Institute of Technology)
Alan Cowen (Hume AI)
Abstract — Join us for a discussion on Ethical Affective Computing, with a focus on its practical implications. The panel will bring together diverse perspectives to discuss the implementation of ethical and legal considerations in real use cases. We will delve into current considerations regarding emotion recognition technology in commercial applications, as well as generative technologies like Large Language Models and image-generation AI. How has industry deployed or planned to deploy Affective Computing technologies in an ethical manner? How can academic researchers contribute to this conversation? Please bring your questions and be part of the discussion!
Tuesday, 12th September 2023
9:15 – 10:15: Keynote: Grace Ahn. Multimodal Extensions of Realities
Room: MR
Chair: Akane Sano
Abstract — With the rapid rise of multimodal platforms, such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), emerging science indicates that human experiences in immersive virtual environments are shaped by the dynamic interactions between user characteristics, media features, and situational contexts. How do virtual experiences influence human minds and how do they transfer into the physical world to continue impacting thoughts, feelings, and behaviors? This talk will cover findings from nearly two decades of lab and field research on multimodal extensions of our realities, with an emphasis on the media psychological mechanisms and outcomes of user experiences.
10:15 – 10:30: Coffee Break (Winter Garden Room)
10:30 – 12:00: Virtual Reality and Agents
Room: MR
Chair: Timothy Bickmore
- Exploration of Physiological Arousal in Divergent and Convergent Thinking using 2D screen and VR Sketching Tools, Samory Houzangbe, Sylvain Fleury, Dimitri Masson, David Gomez Jauregui, Jeremy Legardeur, Nadine Couture and Simon Richir (HESAM Université; Université de Bordeaux, France)
- Social Presence Mediates Audience Behavior Effects on Social Stress in Virtual Public Speaking, Celia Kessassi, Mathieu Chollet, Cédric Dumas and Caroline G. L. Cao (IMT Atlantique, France, University of Glasgow, UK)
- Effects of Social Ingroup Cues on Empathy Towards an Intelligent Virtual Agent With a Mixed-Cultural Background, David Obremski, Paula Friedrich, Philipp Schaper and Birgit Lugrin (University of Würzburg, Germany)
- Validating a virtual human and automated feedback system for training doctor-patient communication skills, Kurtis Haut, Caleb Wohn, Benjamin Kane, Thomas Carroll, Catherine Guigno, Varun Kumar, Ron Epstein, Lenhart Schubert and Ehsan Hoque (University of Rochester, USA)
- A New Task for Predicting Emotions and Dialogue Strategies in Task-Oriented Dialogue, Lorraine Vanel, Alya Yacoubi and Chloé Clavel (Telecom-Paris, France)
10:30 – 12:00: Bias & Ethics
Room: SSR
Chair: Hatice Gunes
- EU law and emotion data, Andreas Häuselmann, Alan M. Sears, Lex Zard and Eduard Fosch-Villaronga (Leiden University, The Netherlands)
- Robustness Analysis uncovers Language Proficiency Bias in Emotion Recognition Systems, Quynh Tran, Krystsina Shpileuskaya, Elaine Zaunseder, Josef Salg, Larissa Putzar and Sven Blankenburg (PricewaterhouseCoopers; Heidelberg University; Hamburg University of Applied Sciences, Germany)
- The effects of gender bias in word embeddings on patient phenotyping in the mental health domain, Gizem Sogancioglu, Heysem Kaya and Albert Ali Salah (Utrecht University, The Netherlands)
- “It’s not Fair!” – Fairness for a Small Dataset of Multi-Modal Dyadic Mental Well-being Coaching, Jiaee Cheong, Micol Spitale and Hatice Gunes (University of Cambridge, UK)
- Towards affective computing that works for everyone, Tessa Verhoef and Eduard Fosch-Villaronga (Leiden University, The Netherlands)
12:00 – 13:15: Lunch Break (Winter Garden Room)
13:15 – 14:45: Health (Session 1)
Room: MR
Chair: Roland Goecke
- Predicting Loneliness from Subject Self-Report, Liza Jivnani, Fallon Goodman, Jon Rottenberg and Shaun Canavan (University of South Florida; George Washington University, USA)
- On Scalable and Interpretable Autism Detection from Social Interaction Behavior, William Saakyan, Matthias Norden, Lola Herrmann, Simon Kirsch, Muyu Lin, Simon Guendelman, Isabel Dziobek and Hanna Drimalla (Bielefeld University; Humboldt Universität zu Berlin; Medical Center-University of Freiburg, Germany)
- Detecting PTSD Using Neural and Physiological Signals: Recommendations from a Pilot Study, Manasa Kalanadhabhatta, Shaily Roy, Trevor Grant, Asif Salekin, Tauhidur Rahman and Dessa Bergen-Cico (University of Massachusetts Amherst; Syracuse University; University of California San Diego, USA)
- Expresso-AI: An Explainable Video-Based Deep Learning Models for Depression Diagnosis, Felipe Moreno, Sharifa Alghowinem, Hae Won Park and Cynthia Breazeal (Massachusetts Institute of Technology, USA)
- Gaze and Head Movement Patterns of Depressive Symptoms During Conversations with Emotional Virtual Humans, Javier Marín-Morales, Jose Llanes-Jurado, Maria Eleonora Minissi, Lucía Gómez-Zaragozá, Alberto Altozano and Mariano Alcañiz (Universitat Politècnica de València, Spain)
13:15 – 14:45: Affective Robotics (Special Session)
Room: SSR
Chair: Antonio Chella
Long Presentations: 12 mins presentation, 3 mins Q&A
- The Effects of Stress and Predation on Pain Perception in Robots, Louis L’Haridon and Lola Cañamero (CY Cergy Paris University / ENSEA / CNRS, France)
- Affect-Based Planning for a Meta-Cognitive Robot Sculptor: First Steps, Selmer Bringsjord, John Slowik, Naveen Sundar Govindarajulu, Michael Giancola, James Oswald and Rikhiya Ghosh (Rensselaer Polytechnic Institute, USA; Icahn School of Medicine at Mount Sinai, USA)
- Inner Speech and Extended Consciousness: a Model Based on Damasio’s Theory of Emotions, Sophia Corvaia, Arianna Pipitone, Angelo Cangelosi and Antonio Chella (University of Palermo, Italy; The University of Manchester, UK)
- Moral Context Matters: A Study of Adolescents’ Moral Judgment Towards Robots, Andrea Luna Tacci, Federico Manzi, Cinzia Di Dio, Antonella Marchetti, Giuseppe Riva and Davide Massaro (Università Cattolica del Sacro Cuore, Milan, Italy)
Short Presentations: 6 mins presentation, 1.5 mins Q&A
- Understanding pleasure, arousal and dominance, and how to map them to a robot+avatar behavior model, Fabrizio Nunnari, Matteo Lavit Nicora, Pooja Prajod, Sebastian Beyrodt, Lara Chehayeb, Elisabeth Andre, Patrick Gebhard, Matteo Malosio and Dimitra Tsovaltzi (DFKI, Saarbrücken, Germany; University of Bologna, Italy; STIIMA, National Research Council of Italy, Lecco, Italy; University of Augsburg, Germany)
- Towards the Evaluation of the Role of Embodiment in Emotions Elicitation, Silvia Rossi, Alessandra Rossi and Sara Sangiovanni (University of Naples Federico II Napoli, Italy)
- The Art of Inquiry: Toward Robots that Infer Speech and Movement Characteristics, Morten Roed Frederiksen and Kasper Stoy (IT University of Copenhagen, Denmark)
- Social Impressions of the NAO Robot and its Impact on Physiology, Ruchik Mishra and Karla Welch (University of Louisville, USA)
14:45 – 15:00: Coffee Break (Winter Garden Room)
15:00 – 16:30: Demos and LBR Posters
Room: MR
Chairs: Jeffrey Girard & Prasanth Murali
Demos
- SAPIEN: Affective Virtual Agents Powered by Large Language Models, Masum Hasan, Cengiz Ozel, Sammy Potter and Ehsan Hoque (University of Rochester)
- Tiltometer: Real-Time Tilt Recognition in Esports, Thorben Ortmann, Sune Maute, Franziska Heil, Kilian Hildebrandt, Pedram Berendjy Jorshery and Larissa Putzar (Hamburg University of Applied Sciences)
- PARK: Parkinson’s Analysis with Remote Kinetic-tasks, Md Saiful Islam, Sangwu Lee, Abdelrahman Abdelkader, Sooyong Park and Ehsan Hoque (University of Rochester)
- PyAFAR: Python-based Automated Facial Action Recognition library for use in Infants and Adults, Saurabh Hinduja, Itir Onal Ertugrul, Maneesh Bilalpur, Daniel S. Messinger and Jeffrey F. Cohn (University of Pittsburgh, Utrecht University, University of Miami)
- MDE – Multimodal Data Explorer for flexible visualization of multiple data streams, Isabelle Arthur, Jordan Quinn, Rajesh Titung, Cecilia Ovesdotter Alm and Reynold Bailey (Rochester Institute of Technology)
- Emognition system – wearables, physiology, and machine learning for real-life emotion capturing, Dominika Kunc, Joanna Komoszynska, Bartosz Perz, Stanislaw Saganowski and Przemyslaw Kazienko (Wroclaw University of Science and Technology)
- Digital Art Therapy with Gen AI: Mind Palette, Daeun Yoo, David Y.J. Kim and Elisandra Lopes (Harvard University, Massachusetts Institute of Technology, Riverside Community Care)
- Open-Sheep-Face: A Comprehensive Application for Sheep Face Analysis and Pain Estimation, Zejian Feng, Martina Karaskova and Marwa Mahmoud (University of Glasgow)
15:00 – 16:30: Sponsors Demo/Presentation
Room: MR
Chair: Ben Leong
Demos/Presentations:
- Empatica
- Lenovo
- ETS
- Hume.ai
18:00: Conference Banquet (Boat cruise)
Buses arrive at MIT at 17:15; don’t miss them! MUST ARRIVE BY 17:30!
Wednesday, 13th September 2023
9:15 – 10:15: Keynote: Justin Baker. Sensing Psychosis: Deep Phenotyping of Neuropsychiatric Disorders
Room: MR
Chair: Jeffrey Girard
Abstract — Traditional psychiatric care relies on subjective assessments, hindering progress in personalized treatments. However, pervasive computing offers unprecedented opportunities to develop dynamic models of mental illness by quantifying individual behavior over time and applying latent construct models. By transcending the precision-personalization dichotomy, we can revolutionize therapeutic discovery through unobtrusive, quantitative behavioral phenotyping. This presentation explores the integration of affective computing in severe mental illnesses such as depression, bipolar disorder, and schizophrenia. Affective computing enhances our understanding of illness fluctuations, contextual factors, and treatment interventions, enabling the identification of causal relationships and targeted interventions for specific neural circuits. By employing single-case experimental designs, we demonstrate the potential of affective computing to reshape psychiatric research and clinical practice. This technological integration paves the way for a closed-loop, personalized approach that optimizes care for individuals seeking treatment.
10:15 – 10:30: Coffee Break (Winter Garden Room)
10:30 – 12:00: Health (Session 2) and Physiology
Room: MR
Chair: Nadia Berthouze
- Video-based estimation of pain indicators in dogs, Hongyi Zhu, Yasemin Salgırlı, Pınar Can, Durmuş Atılgan and Albert Ali Salah (University of Amsterdam; Utrecht University, The Netherlands, Ankara University, Turkey)
- A Weakly Supervised Approach to Emotion-change Prediction and Improved Mood Inference, Soujanya Narayana, Ibrahim Radwan, Ravikiran Parameshwara, Iman Abbasnejad, Akshay Asthana, Ramanathan Subramanian and Roland Goecke (University of Canberra; Seeing Machines Ltd, Australia)
- Multimodal assessment of best possible self as a self-regulatory activity for the classroom, Batuhan Sayis, Marc Beardsley and Marta Portero-Tresserra (Universitat Pompeu Fabra; Universitat Autonoma de Barcelona, Spain)
- AttentioNet: Monitoring Student Attention Type in Learning with EEG-Based Measurement System, Dhruv Verma, Sejal Bhalla, S.V. Sai Santosh, Saumya Yadav, Aman Parnami and Jainendra Shukla (University of Toronto, Canada, NVIDIA Corporation, USA, IIIT-Delhi, India)
- Do You Even Need Sensors?: Synthetic Biomusic as an Empathic Technology, Daway Chou-Ren, Mike Winters, Javier Hernandez, Daniel McDuff, Jina Suh, Vanessa Rodriguez, Gonzalo Ramos and Mary Czerwinski (Microsoft Research, USA and Brazil)
10:30 – 12:00: Natural Language Processing
Room: SSR
Chair: Mohammad Soleymani
- Assessing Affective Engagement with Digitally-Delivered Narratives of Invisible Disability, Daniel Kessler, David Y.J. Kim, Grace Ahn, Neska Elhaouij and Rosalind Picard (Massachusetts Institute of Technology, USA)
- Using Comments for Predicting the Affective Response to Social Media Posts, Yi-Chia Wang, Jane Dwivedi-Yu, Robert E. Kraut and Alon Halevy (Meta, Carnegie Mellon University, USA)
- Sentiment Analysis for Shona, Barlette Makuwe, Koena Mabokela and Tim Schlippe (IU International University of Applied Sciences, Germany, University of Johannesburg, South Africa)
- Therapist Empathy Assessment in Motivational Interviews, Leili Tavabi, Trang Tran, Brian Borsari, Joannalyn Delacruz, Joshua Woolley, Stefan Scherer and Mohammad Soleymani (University of Southern California; University of California San Francisco, USA)
12:00 – 13:15: Lunch Break (Winter Garden Room)
13:15 – 14:45: Tools & Datasets
Room: MR
Chair: Marwa Mahmoud
- An Intelligent Infrastructure Toward Large Scale Naturalistic Affective Speech Corpora Collection, Shreya G. Upadhyay, Woan-Shiuan Chien, Bo-Hao Su, Lucas Goncalves, Ya-Tse Wu, Ali N. Salman, Carlos Busso and Chi-Chun Lee (National Tsing Hua University, Taiwan, The University of Texas at Dallas, USA)
- CORAE: A Tool for Intuitive and Continuous Retrospective Evaluation of Interactions, Michael Sack, Maria Teresa Parreira, Jenny Fu, Asher Lipman, Hifza Javed, Nawid Jamali and Malte Jung (Cornell University, Honda Research Institute, USA)
- FB-SEC-1: A Social Emotion Cause Dataset, Abdullah Alsaedi, Floriana Grasso, Stuart Thomason and Phillip Brooker (University of Liverpool, UK)
- DynAMoS: The Dynamic Affective Movie Clip Database for Subjectivity Analysis, Jeffrey Girard, Yanmei Tie and Einat Liebenthal (University of Kansas; Harvard Medical School, USA)
- FabricTouch: A Multimodal Fabric Assessment Touch Gesture Dataset to Slow Down Fast Fashion, Temitayo Olugbade, Lili Lin, Alice Sansoni, Nihara Warawita, Yuanze Gan, Xijia Wei, Bruna Petreca, Giuseppe Boccignone, Douglas Atkinson, Youngjun Cho, Sharon Baurley and Nadia Berthouze (University College London; Royal College of Art; Manchester Metropolitan University, UK, Università degli Studi di Milano, Italy)
13:15 – 14:45: Affective Computing & VR in Healthcare Applications (Special Session)
Room: SSR
Chairs: Sylvia Pan and Marco Gillies
- Leveraging WiFi Sensing toward Automatic Recognition of Protective Behaviors, Xijia Wei, Temitayo Olugbade, Fangzhan Shi, Shuang Wu, Amanda Williams, Nicolas Gold, Youngjun Cho, Kevin Chetty and Nadia Berthouze (University College London; University of Sussex, UK)
- Eye tracking for affective computing in virtual reality healthcare applications, David Harris, Tom Arthur, Mark Wilson and Sam Vine (University of Exeter, UK)
- Physiological correlates of stress induced by virtual humans in a naturalistic virtual reality scenario, Jonathan Giron, Yulia Golland, Jelena Mladenovic, Maxine Hanrieder and Doron Friedman (Reichman University, Israel; Union University, Serbia)
- Development and Validation of an iPad-based Serious Game for Emotion Recognition and Attention Tracking towards Early Identification of Autism, Chiara Piazzalunga, Pierpaolo Molino, Chiara Giangregorio, Stefania Fontolan, Cristiano Termine and Simona Ferrante (Politecnico di Milano; Università dell’Insubria, Italy)
- Multimodal Prediction of Alexithymia from Physiological and Audio Signals, Valeria Filippou, Nikolas Theodosiou, Mihalis A. Nicolaou, Georgia Panayiotou, Elena Constantinou, Marios Theodorou and Maria Panteli (The Cyprus Institute; University of Cyprus, Cyprus)
14:45 – 15:00: Coffee Break (Winter Garden Room)
15:00 – 17:00: AAAC Townhall & Closing Ceremony
Room: MR