Tutorial: HUMANE AI: Vision for Inclusive Intelligent Interaction
Tutorial date/time: 2.00pm-6.00pm, 18th October 2022 (Local time in Japan)
Mode of tutorial: Virtual (Zoom; a room will be set up for ACII in-person audiences to attend remotely)
Prof Cathy Holloway, UCL Interaction Centre & Global Disability Innovation Hub
Dr Aneesha Singh, UCL Interaction Centre
Mr. Sahan Bulathwela, UCL Computer Science
Dr. Temitayo Olugbade, UCL Interaction Centre
Ms Jamie Danemayer, UCL Interaction Centre & Global Disability Innovation Hub
Prof. Nadia Bianchi-Berthouze, UCL Interaction Centre
Prof John Shawe-Taylor, UCL Computer Science
Creating socially just Artificial Intelligence (AI) and Machine Learning (ML) for the inclusion of all people is critical for the future of humanity and the sustainability of the planet. Applications for health and education permeate all spheres of life and increasingly use AI/ML to power them. These services are often mediated by increasingly intelligent agents and systems. A cross-cutting theme in many solutions is the personalisation of experience powered by ML/AI. Personalisation is an excellent example of both the opportunities and the challenges of AI. One such approach is the Open Learner Model (OLM). OLMs facilitate self-regulated learning by triggering reflection, planning and other meta-cognitive activities in learners, while also communicating learner progress to stakeholders beyond teachers. Personalisation has also demonstrated value to the healthcare system and to people with chronic mental and physical conditions: a human-centric AI approach can help reduce the burden on much-stretched healthcare systems. Furthermore, individuals' needs, which are personal (i.e. each individual having a distinct combination of symptoms and needs), can be incorporated, allowing a level of user-expert experience to drive care. These examples demonstrate the opportunity for humane AI.
The Humane AI programme defines humane AI as "trustworthy, ethical AI that enhances human capabilities and empowers citizens and society to effectively deal with the challenges of an interconnected globalised world". However, the examples above are all taken from the Global North. This raises an obvious challenge: a lack of data sets and solutions from the Global South which, if uncorrected, will increase inequality globally. There is also a societal bias towards developing solutions based on the social norms of individualism that pervade many Global North solutions. In the Global South, for example, technologies are used within a much richer interdependence among elements of social systems. This type of social and human infrastructure is hitherto not well (if at all) explored within AI/ML.
The rise of humane AI technologies necessitates an understanding of how they are applied in specific contexts and designed to be inclusive. Challenges in the use of AI/ML algorithms, such as privacy and security concerns, bias, unfairness, and lack of cultural awareness, can affect certain populations in different ways, marginalising them. Therefore, this tutorial will: provide updates on the state of the art of humane AI within the health and education sectors, taking an interdisciplinary and socially just perspective; share experiences and challenges of developing inclusive and context-specific AI/ML health and social wellbeing technologies (through case studies); and work towards developing design methods for re-envisioning what it means for AI/ML technologies to be inclusive and context-specific, rather than simply replicating the state of the art of the Global North. Our agenda is to introduce the audience to cross-research-area dialogues and collaborations, focussing on:
Session I: Introduction to Humane AI – We will explore the topic of humane AI alongside the marginalisation of people within technology design and use. In this context we use the term "marginalised" to refer to people or groups who are marginalised in certain technological contexts, even if they are not part of what we commonly describe as marginalised groups. We unpack the idea of empowerment, which is often used to define advances in areas such as health and education for marginalised people. However, empowerment approaches can presuppose a disempowered individual, while at the same time assuming that all individuals wish to be empowered [4, p. 308]. These concepts will be explored through the ideal of personalisation of solutions.
Session II: Case studies (Case Study 1: Education & Information Retrieval) – We explore education and information retrieval through the example thread of individualised approaches. Learners are frequently battling information overload; ML can make the increasing volumes of information more manageable and consumable for humans. In a sea of information and choices of sources, the right content should be presented to the right person at the right time. AI and ML can help summarise long documents into humanly intuitive narratives of information, and augment content with additional enrichments such as speech-to-text transcription or the explanation of an image/figure. These technologies offer opportunities for making information more readily accessible to marginalised communities across cultures, languages, abilities, and geographies. To illustrate these opportunities, we will show pre-recorded online demonstrations of tools.
Session III: Challenges, Opportunities and Applications (Case Study 2: Health) – Humane AI brings both opportunities and challenges. Within health, ML/AI can be used to design new technologies, to support decision-making, and to better understand trends. We explore health from two perspectives: the case of new technologies for disabled or chronically ill people, and the role of AI/ML in large population health data sets.
Structure and Contents:
Session I: Introduction to Humane AI (50 mins – standard lecture + 10 min break)
- Humane AI
- Individualisation, empowerment, and community
Session II: Case study Education & Information Retrieval (30 mins / Lecture combined with hands-on + 10 min break)
- Online pre-recorded demos for remote attendees:
- X5Learn: http://x5learn.org/
- Wikifier: https://wikifier.org
- IFacetSum: https://biu-nlp.github.io/iFACETSUM/WebApp/client/
Session III: Challenges, Opportunities and Applications – Case Study Health (80 mins / normal lecture and group discussion)
- Exploring health from two perspectives – the individual with chronic pain seeking information to self-manage their condition, and the policy maker seeking information to inform healthcare
- Discussion on how the “individual” could be defined
- Discussion on humane AI that can adapt to cultural variables
Tutorial materials: TBC
Cathy Holloway (UCL): firstname.lastname@example.org
General enquiries to ACII2022 Tutorial Chair:
Youngjun Cho (UCL): email@example.com
Ruud Hortensius (Universiteit Utrecht): firstname.lastname@example.org