DRSelects: IAC Member Emily Corrigan-Kavanagh on Design for People-centred AI
Please introduce yourself, your role in the DRS and your research.

I am a Surrey Future Fellow, a Surrey AI Fellow and Fellow of the Centre of Excellence on Ageing, based in the People-Centred AI Institute (PAI) at the University of Surrey. I was elected as a new member of the International Advisory Council (IAC) in June this year. In this role, I am making it my mission to ensure the Design Research Society (DRS) becomes a springboard and catalyst for a new design research area called “Design for People-Centred AI”. This area aims to build global communities investigating how design research, such as participatory design and co-design methods, can be used to collaboratively develop AI for societal wellbeing with end-users. Within this area, I am currently focusing on “Designing AI for Home Wellbeing”, exploring how AI could be purposely designed to support home wellbeing, such as the basic and psychological needs supported by the home. I am uniquely positioned to lead these research areas within the DRS, with a PhD in “Designing for Home Wellbeing” and six years’ experience researching new technologies through design research, such as sound sensing AI for home and workplace wellbeing.

Could you talk about the initiatives you’re involved with in the DRS and any upcoming events you’d like to share?

I am currently establishing a new Special Interest Group (SIG) called “Design for People-Centred AI” that will align and collaborate with other SIGs such as SIGWELL and Inclusive Design. I also plan to use my AI networks at PAI to contribute to decision-making around AI and design, such as the design of future conference theme tracks and other related initiatives. Additionally, I hope to support the working group for Equality, Diversity, and Inclusion to promote the creation of diverse, equitable and inclusive practices in both design research and AI research. In the past, I have run public and academic events exploring related topics such as “Research and Innovation in Technologies for Home Wellbeing” and “Envisioning Future Homes for Wellbeing”. Going forward, I will look to run another event converging scholarship related to “Designing AI for Home Wellbeing”, to continue growing collaborative research between AI experts and design researchers and to promote inclusive practices.

What do you see as the benefits of being involved with the DRS and how can those interested become more involved in the Society?

By being involved with the DRS, I am afforded many opportunities to interact with design researchers from around the world and to expand my design research network through engagement with upcoming design events, different SIGs and International Advisory Council (IAC) meetings. The DRS also offers a platform for me to explore ideas for future design events that support the DRS aims, such as those promoting “Design for People-Centred AI”, which could be advertised through the DRS networks. I would encourage current members who want to gain the full benefit of their membership to join relevant SIGs, take part in relevant online discussions, collaborate on SIG-organised events, attend and submit papers to affiliated conferences, attend open IAC meetings, and volunteer to support the biennial DRS conference.

DRS Digital Library Picks

Hwang, E., and Lim, Y. (2020) Tuning into the Sound: Discovering Motivational Enablers for Self-Therapy Design, in Boess, S., Cheung, M. and Cain, R. (eds.), Synergy - DRS International Conference 2020, 11-14 August, Held online. https://doi.org/10.21606/drs.2020.287

Acknowledging the lack of effective design enablers for self-therapeutic experiences to counteract the voluntary hyper-productivity of modern-day life, this paper explores the use of sound as a design material to support self-therapy, as well as enablers that can bring self-therapeutic value to design. I was particularly intrigued to read this paper as I have previous experience in research exploring sound sensing AI for wellbeing, where we asked participants to listen to routine sounds in their homes and workplaces. Indeed, we found this research activity to show self-therapy potential, as it encouraged participants to practise mindfulness towards emerging sounds. The paper proceeds by presenting the methodology and findings for a novel “Tune-In” diary study, based on Gaver et al.’s (1999) cultural probe approach, to understand what process, means and quality aspects of active reflection on mundane sounds can promote self-therapeutic experience. Overall, the paper offers some inspirational prompters to support future design for self-therapeutic experiences and demonstrates everyday sounds as a promising design tool in this.

Kelliher, A., Barry, B., Berzowska, J., O'Murchu, N., and Smeaton, A. (2018) Conversation: Beyond black boxes: tackling artificial intelligence as a design material, in Storni, C., Leahy, K., McMahon, M., Lloyd, P. and Bohemia, E. (eds.), Design as a catalyst for change - DRS International Conference 2018, 25-28 June, Limerick, Ireland. https://doi.org/10.21606/drs.2018.784

This Conversation resonated with my research interests in designing for people-centred AI as it invites participants to view AI as a formative “design material” that can be consciously shaped to support societal wellbeing and enhance human experiences. It begins this exploration with five panellists from diverse backgrounds introducing their related expertise and experiences, such as in home-based stroke rehabilitation, digital mental health, reminiscence therapy and curatorial practice. The main question, “How can we enhance and evolve the intelligence, abilities, and experience of all human actors in AI supported systems?”, then invites wider discussion from participants. An outcome of particular interest was the agreed-upon need for “situated communal AI knowledge systems” that could support distributed local centres of access, control and accountability to overcome widespread public concerns about the safety and security of data usage in AI systems. I know from running similar events myself that concerns about privacy and surveillance continue to permeate and disrupt people’s willingness to embrace AI-powered technologies.

Nicenboim, I., Giaccardi, E., and Redström, J. (2022) From explanations to shared understandings of AI, in Lockton, D., Lenzi, S., Hekkert, P., Oak, A., Sádaba, J., Lloyd, P. (eds.), DRS2022: Bilbao, 25 June - 3 July, Bilbao, Spain. https://doi.org/10.21606/drs.2022.773

This paper starts to tackle some of the key issues around the explainability of AI and offers some encouraging design strategies to support the diverse learning and understanding needs of various users in different contexts. Rather than explore ways to describe the technical workings of an AI system, a new approach is proposed to enable people to understand whether decisions made by that system can be trusted. Such an approach calls for people and AI systems to be treated as active agents in everyday experiences and to comprehend how such experiences create shared understandings. Using a more-than-human design perspective, the paper presents two intriguing design strategies for moving beyond explanations to shared understandings: “looking across AI” and “exposing AI failures”. “Looking across AI” involves an attempt to map and visualise the multiple interactions between humans and non-humans to contextualise why certain AI interactions come about. “Exposing AI failures” includes illustrating the limitations of AI, ultimately allowing users to explore alternative interactions to support better user experiences. Moving beyond the user as a passive and neutral participant in explainable AI, the presented design strategies show promise in situating users as active agents in the conceptualisation and understanding of AI.

Auernhammer, J. (2020) Human-centered AI: The role of Human-centered Design Research in the development of AI, in Boess, S., Cheung, M. and Cain, R. (eds.), Synergy - DRS International Conference 2020, 11-14 August, Held online. https://doi.org/10.21606/drs.2020.282

This paper aligns beautifully with my research interests in collaboratively designing people-centred AI by arguing the need for a pan-disciplinary design approach to drive forward human-centred AI. It discusses eight human-centred design (HCD) approaches and how they might be applied to support human-centred AI development, providing a fantastic resource for design researchers and other experts interested in developing this growing field. By setting out the multiplicity of ethical perspectives that can emerge when addressing AI from different HCD approaches, the paper supports future researchers in selecting suitable strategies. It finishes with a representation of how a pan-disciplinary design approach might operate, to inspire future pan-disciplinary research. It is a must-read for any design researcher or related disciplinary expert pursuing research in human-centred AI.

Harbers, M., and Overdiek, A. (2022) Towards a living lab for responsible applied AI, in Lockton, D., Lenzi, S., Hekkert, P., Oak, A., Sádaba, J., Lloyd, P. (eds.), DRS2022: Bilbao, 25 June - 3 July, Bilbao, Spain. https://doi.org/10.21606/drs.2022.422

Moving beyond theoretical notions of ethical AI, this paper starts to explore what ethical AI means in practice. Specifically, it investigates how the ethical dimensions of AI can be practically examined in a living lab setting, as an alternative to traditional AI ethics research that focuses on providing overarching guiding principles open to different interpretations. The term “Responsible Applied AI” (RAAI) is coined to describe how AI can be applied ethically in real-world situations, and five requirements for building a successful living lab for such purposes are put forward. I particularly enjoyed reading this paper as it highlights the socially constructed, temporal and contextual nature of what we deem to be fair and ethical, underlining the need for RAAI. Furthermore, it emphasises that solutions for ethical AI may not always be technical and could indeed be social interventions that evolve with the AI system as it learns and changes over time.

September 17, 2024