Once the program is finalized, it will be available on this page. In the meantime, see the accepted papers below.

You can also find the format guidelines at the end of this page.

Accepted papers

  • Affective Video Database Online Study (Michal Gnacek, Ifigeneia Mavridou, Ellen Seiss, Theodoros Kostoulas, Emili Balaguer-Ballester and Charles Nduka)
  • Analysis of Semi-Supervised Methods for Facial Expression Recognition (Shuvendu Roy and Ali Etemad)
  • Sensor-Based Emotion Recognition in Software Development: Facial Expressions as Gold Standard (Nicole Novielli, Daniela Grassi, Filippo Lanubile and Alexander Serebrenik)
  • Cross-Linguistic Study on Affective Impression and Language for Visual Art Using Neural Speaker (Hiromi Narimatsu, Ryo Ueda and Shiro Kumano)
  • The value of mood measurement for regulating negative influences of social media usage: A case study of TikTok (Niklas Cosmann, Jana Haberkern, Alexander Hahn, Patrick Harms, Jan Joosten, Katharina Klug and Tanja Kollischan)
  • EmoPain(at)Home: Dataset and Automatic Assessment within Functional Activity for Chronic Pain Rehabilitation (Temitayo Olugbade, Raffaele Buono, Amanda de C Williams, Santiago de Ossorno Garcia, Nicolas Gold, Catherine Holloway and Nadia Bianchi-Berthouze)
  • Federated Learning for Affective Computing Tasks (Krishna Somandepalli, Hang Qi, Brian Eoff, Alan Cowen, Kartik Audhkhasi, Josh Belanich and Brendan Jou)
  • Extracting Multimodal Embeddings via Supervised Contrastive Learning for Psychological Screening (Manasa Kalanadhabhatta, Adrelys Mateo Santana, Deepak Ganesan, Tauhidur Rahman and Adam Grabell)
  • Domain Adaptation for Stance Detection towards Unseen Target on Social Media (Ruofan Deng, Li Pan and Chloé Clavel)
  • DeepFN: Towards Generalizable Facial Action Unit Recognition with Deep Face Normalization (Javier Hernandez, Daniel McDuff, Ognjen Rudovic, Alberto Fung and Mary Czerwinski)
  • Exploring Multimodal Fusion for Continuous Protective Behavior Detection (Guanting Cen, Chongyang Wang, Temitayo A. Olugbade, Amanda C. De C. Williams and Nadia Bianchi-Berthouze)
  • Mental Health Indices and Structured Biomarker Reports for Assistive Mental Healthcare (Rahul Majethia, Vadlamudi Pratiksha Sharma and Rishika T D)
  • The influence of emotional expressions of an industrial robot on human collaborative decision-making (Koki Usui, Kazunori Terada and Celso de Melo)
  • Automatic Detection of Subjective, Annotated and Physiological Stress Responses from Video Data (Matthias Norden, Oliver T. Wolf, Lennart Lehmann, Katja Langer, Christoph Lippert and Hanna Drimalla)
  • Audio and ASR-based Filled Pause Detection (Aggelina Chatziagapi, Dimitris Sgouropoulos, Constantinos Karouzos, Thomas Melistas, Theodoros Giannakopoulos, Athanasios Katsamanis and Shrikanth Narayanan)
  • Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals (Juan Vazquez-Rodriguez, Gregoire Lefebvre, Julien Cumin and James Crowley)
  • Label Uncertainty Modeling and Prediction for Speech Emotion Recognition using t-Distributions (Navin Raj Prabhu, Nale Lehmann-Willenbrock and Timo Gerkmann)
  • Assistive Video Filters for People with Parkinson’s Disease to Remove Tremors and Adjust Voice (Kurtis Haut, Adira Blumenthal, Sarah Atterbury, Xiaofei Zhou, Wasifur Rahman, Emanuela Natali, Rafayet Ali and Ehsan Hoque)
  • Context- and movement-aware analysis of physiological responses in the urban environment using wearable sensors (Dimitra Dritsa and Nimish Biloria)
  • Consistent Smile Intensity Estimation From Wearable Optical Sensors (Katsutoshi Masai, Monica Perusquía-Hernández, Maki Sugimoto, Shiro Kumano and Toshitaka Kimura)
  • Evaluating Temporal Patterns in Applied Infant Affect Recognition (Allen Chang, Lauren Klein, Marcelo R. Rosales, Weiyang Deng, Beth A. Smith and Maja J. Matarić)
  • Speech Behavioral Markers Align on Symptom Factors in Psychological Distress (Larry Zhang, Jacek Kolacz, Albert Rizzo, Stefan Scherer and Mohammad Soleymani)
  • ALOE: Active Learning based Opportunistic Experience Sampling for Smartphone Keyboard driven Emotion Self-report Collection (Surjya Ghosh, Bivas Mitra and Pradipta De)
  • Romantic and Family Movie Database: Towards Understanding Human Emotion and Relationship via Genre-Dependent Movies (Po-Chien Hsu, Jeng-Lin Li and Chi-Chun Lee)
  • Documenting use cases in the affective computing domain using Unified Modeling Language (Isabelle Hupont and Emilia Gómez)
  • Investigating the Interplay Between Self-Reported and Bio-Behavioral Measures of Stress: A Pilot Study of Civilian Job Interviews with Military Veterans (Ehsanul Haque Nirjhar, Md Nazmus Sakib, Ellen Hagen, Neha Rani, Sharon Lynn Chu, Winfred Arthur, Amir Behzadan and Theodora Chaspari)
  • Multi-corpus Affect Recognition with Emotion Embeddings and Self-Supervised Representations of Speech (Sina Alisamir, Fabien Ringeval and François Portet)
  • Handling Missing Data For Sleep Monitoring Systems (Shkurta Gashi, Lidia Alecci, Martin Gjoreski, Elena Di Lascio, Abhinav Mehrotra, Mirco Musolesi, Maike E. Debus, Francesca Gasparini and Silvia Santini)
  • Profiling of low back pain patients for the design of a tailored coaching application (Florian Debackere, Céline Clavel, Alexandra Roren, Viet-Thi Tran, Galia Snoubra, Yosra Messai, François Rannou, Christelle Nguyen and Jean-Claude Martin)
  • A New Perspective on Smiling and Laughter Detection: Intensity Levels Matter (Hugo Bohy, Kevin El Haddad and Thierry Dutoit)
  • Play with Emotion: Affect-Driven RL (Matthew Barthet, Ahmed Khalifa, Antonios Liapis and Georgios N. Yannakakis)
  • Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition (Varsha Suresh and Desmond Ong)
  • Classifying Laughter using Component Process Model (Earl Capistrano, Kristen Ann Raphaelle Espiritu, Marybelle Tandoc, Johanna Lim and Jocelynn Cu)
  • CIAO! A Contrastive Adaptation Mechanism for Facial Expression Recognition (Pablo Barros and Alessandra Sciutti)
  • Modeling Emotion-Focused Coping as a Decision Process (Nutchanon Yongsatianchot and Stacy Marsella)
  • Advancing the Understanding and Measurement of Workplace Stress in Remote Information Workers from Passive Sensors and Behavioral Data (Mehrab Bin Morshed, Javier Hernandez, Daniel McDuff, Jina Suh, Esther Howe, Kael Rowan, Marah Abdin, Gonzalo Ramos, Tracy Tran and Mary Czerwinski)
  • Detecting Depression from Social Media Data as a Multiple-Instance Learning Task (Paulo Mann, Elton H. Matsushima and Aline Paes)
  • Interpretable Explainability in Facial Emotion Recognition and Gamification for Data Collection (Krist Shingjergji, Deniz Iren, Felix Böttger, Corrie Urlings and Roland Klemke)
  • Language Use in Mother-Adolescent Dyadic Interaction: Preliminary Results (Laura Cariola, Saurabh Hinduja, Lisa Sheeber, Nick Allen and Jeffrey Cohn)
  • Computational Empathy Facilitates Human Creative Problem Solving (Matthew Groh, Craig Ferguson, Robert Lewis and Rosalind Picard)
  • Pierre Lévy’s Kansei Philosophy As Understood Through Human-Computer Interaction Theories (Erik Campano)
  • Ballistic Timing of Smiles is Robust to Context, Gender, Ethnicity, and National Differences (Maneesh Bilalpur, Kenneth Goodrich, Saurabh Hinduja and Jeffrey Cohn)
  • Monologue versus Conversation: Differences in Emotion Perception and Acoustic Expressivity (Woan-Shiuan Chien, Shreya Upadhyay, Wei-Cheng Lin, Ya-Tse Wu, Bo-Hao Su, Carlos Busso and Chi-Chun Lee)
  • Estimating Personal Model Parameters from Utterances in Model-based Reminiscence (Shoki Sakai, Kazuki Itabashi and Junya Morita)
  • Bias Reducing Multitask Learning on Mental Health Prediction (Khadija Zanna, Kusha Sridhar, Han Yu and Akane Sano)
  • A Deep Ensemble Approach of Anger Detection from Audio-Textual Conversations (Mahjabin Nahar and Mohammed Eunus Ali)
  • Affective Ratings of Nonverbal Vocalizations Produced by Minimally Verbal Individuals: What Do Naive Listeners Perceive? (Kristina Johnson, Amanda O’Brien, Ayelet Kershenbaum, Jaya Narain, Simon Radhakrishnan and Rosalind Picard)
  • Online Detection of Attentiveness of Students with Special Needs (Khandker Aftarul Islam, Tanzima Hashem, Mohammed Eunus Ali, Tasin Ishmam, Aniruddha Ganguly, Madhusudan Basak, Nusrat Jahan and Sajida Rahman Danny)
  • Exploring Affective Dimension Perception from Bodily Expressions and Electrodermal Activity in Paramedic Simulation Training (Surely Akiri, Sanaz Taherzadeh, Vasundhara Misal and Andrea Kleinsmith)
  • Choose or Fuse: Enriching Data Views with Multi-label Emotion Dynamics (Xi Laura Cang, Rubia Guerra, Paul Bucci, Laura Rodgers, Bereket Guta, Hailey Mah, Shinmin Hsu, Qianqian Feng, Chuxuan Zhang, Anushka Agrawal and Karon Maclean)

Conference format

Main conference
Paper presentations:

To maximize engagement given the hybrid format, ACII2022 will be a single-track program, and all presentations will be talks (i.e., there will be no posters).
  • Long Talks are at most 16 minutes, followed by 4 minutes of Q&A (20 minutes total).
  • Flash Talks are 5 minutes, with Q&A at the end of each Flash Talk session.
All authors, whether attending in person or virtually, are asked to prepare and submit their slides in advance (PowerPoint; 10 MB or less; common fonts are recommended to minimize cross-platform issues). This will help keep transitions between talks smooth.

Authors giving their talks virtually should plan to present synchronously, that is, give a “live” talk at the scheduled time; we have tried to account for time-zone differences as much as possible. If you prefer (for example, in case of unexpected issues on the day itself), you may instead prepare a pre-recorded talk.

Here are our recommended recording guidelines:

  • Video file format: mp4
  • Dimensions: minimum 720 pixels in height (landscape)
  • Aspect ratio: 16:9
  • Although not required, we recommend showing your face in the video.
  • To record, you may use Zoom, Microsoft Teams, OBS, or any similar virtual meeting platform, sharing your screen to show your presentation slides.
  • Name your recording file: yourpapernumber_lastnameoffirstauthor.mp4, e.g., 88_Truong.mp4

Submission instructions for videos and slides will be communicated closer to the date.

Reception: October 18 or 20.

Pre-conference events (workshops/challenges, tutorials, doctoral consortium):

  • Workshops/challenges and tutorials: virtual by default (but may be hybrid, at the discretion of each event's chairs/organizers), October 17 or 18.
  • Doctoral Consortium: hybrid; although designated a pre-conference event, it may take place during the main conference (October 19–21).