Tutorial date/time
September 10 from 13:30 – 16:30
Room E14-244
Tutorial presenter(s)
Andreas Häuselmann, Center for Law and Digital Technologies, Leiden University, The Netherlands
Deniz Iren, Department of Information Science, Open Universiteit, The Netherlands
Bhoomika Agarwal, Department of Information Science, Open Universiteit, The Netherlands
Tutorial description
The European Union is currently negotiating the AI Act, a legislative initiative aimed at establishing a comprehensive and standardized framework for governing artificial intelligence. First proposed by the European Commission in April 2021, the text was most recently amended in June 2023 by the European Parliament (‘AI Act proposal’). The proposal outlines a risk-based approach that classifies AI practices into three categories: unacceptable-risk, high-risk, and low-risk. Practices falling under the unacceptable-risk category are strictly prohibited. These include, for instance, the use of AI systems to infer the emotions of natural persons in the context of law enforcement, border management, the workplace, and education; AI systems deploying subliminal, manipulative, or deceptive techniques; the exploitation of vulnerabilities of specific groups; AI-driven social scoring systems; and remote biometric identification for law enforcement purposes.
The high-risk category encompasses systems and practices that have the potential to harm individuals’ health or safety, or to impact their fundamental rights. Such systems are allowed, but are subject to stringent compliance requirements.
The AI Act proposal may have significant implications for affective computing research and practice. Firstly, it establishes a definition of emotion recognition systems, namely “an AI system that aims to identify or infer emotions, thoughts, states of mind, or intentions of individuals or groups based on their biometric and biometric-based data”.
Secondly, the AI Act proposal emphasizes concerns and risks related to emotion recognition systems. It acknowledges that the expression and perception of emotions can vary across cultures and contexts, and it mentions the following ‘shortcomings’ of such systems: limited reliability, lack of specificity, and limited generalisability. According to the proposal, these shortcomings could lead to major risks of abuse.
Lastly, the AI Act proposal imposes specific transparency obligations on providers and deployers of emotion recognition systems. In addition, high-risk systems are subject to a fundamental rights impact assessment.
Structure and Contents
Part 1: Presentation on the AI Act proposal, discussing the provisions most relevant to the affective computing community and highlighting potential impacts on affective computing research and practice (60–90 mins).
Part 2: Interactive session (120-150 mins).
Participants will be split into breakout groups to elicit their perceptions, concerns, and proposed mitigation strategies. They will match the risks identified by the AI Act proposal with the risks reported by the ACII community; for the latter, we will provide participants with a thematic analysis of the ethical impact statements of 70 papers accepted for presentation at the ACII conference. During the group discussions, Andreas will be available to answer questions and provide clarifications regarding the legal text, while Deniz will facilitate the discussions. Finally, each group will briefly present its findings and highlight potential mismatches between the risks identified by the AI Act proposal and the risks reported by the ACII community (based on the ethical impact statements).
Tutorial materials
- TBD
Contact
a.n.hauselmann@law.leidenuniv.nl
General enquiries to ACII2023 Tutorial Chairs:
Emily Mower Provost (University of Michigan): emilykmp@umich.edu
Albert Ali Salah (Universiteit Utrecht): a.a.salah@uu.nl