The Role of Human-Centered Design in AI Projects

Introduction

The list of current and future beneficial use cases of AI keeps on growing. In September, Wired published an article about an AI upgrade to the US National Guard's fire-mapping drones that promises to shorten the critical response time between detecting and controlling wildfires. Only months before, the New York Times reported on the use of deepfake technology in a new HBO documentary to keep the identity of the persecuted homosexual interviewees safe while still preserving their humanity for storytelling purposes. With both of these arguably beneficial examples, it isn't hard to imagine how the same technologies could be put to more sinister uses.

AI, or some form of advanced machine learning, can be found in basically every smartphone and online service nowadays, giving the technology a direct and noticeable impact on users. With the choice to use it in everyday consumer products comes a new wave of open design decisions: how should these automated systems behave, how much control should users have over them, and what shouldn't they be allowed to do? Human-centered design methodologies might be able to address these questions by translating human values into design restrictions, solutions, or principles. We therefore set out to understand the role of human-centered designers in the AI development process.

What we set out to find:

  1. What is the role of human-centered design in the design of AI systems, now and in the future?
  2. How are the success metrics of an AI system defined from a designer's perspective?
  3. To what extent is human well-being considered when designing an AI system?
  4. To what extent do HCD practitioners feel morally responsible for the implications of the AI systems they design?
  5. How are the underlying values of the AI system defined and embodied?

The approach

Results

1. Designers require a seat at the table to represent users and to fight and advocate for their rights

Furthermore, designers should advocate and push for the values of transparency and user agency. As human-centered practitioners, they are the voice of the user at the table and should make sure that users' needs are met and their concerns heard. Of course, designers shouldn't be the only ones caring about these issues; managers and engineers should do the same. But since designers focus on the user and their well-being, it is only natural that the fight starts with them.

2. Designing an AI product is different, but not that different, from designing a conventional product.

In that case, AI becomes a potential tool for advanced automation and for removing points of friction for the user. For designers, this means (as with every tool) that they need to be aware of an AI's capabilities and limitations.

The key difference between an AI and a non-AI product is its evolutionary aspect. An AI product's capabilities might be limited at launch and grow as new data comes in over time, as with Spotify's Discover Weekly algorithm, which produces sometimes eerily fitting recommendations by analyzing your listening habits. In other instances, training the AI requires active user involvement, and the design needs to incorporate that functionality. Results might not be perfect in the beginning, but they can improve drastically over time. Communicating this to the user in a transparent manner is crucial, and such communication is often more about why something works in a specific way than about how it works in technical terms.
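
To make this evolutionary aspect concrete, the sketch below shows a toy feedback loop in Python. It is purely illustrative: the ToyRecommender class, its genre catalog, and the frequency-count heuristic are hypothetical stand-ins, not Spotify's actual algorithm, which relies on far more sophisticated methods.

    # A toy sketch of the evolutionary aspect described above (hypothetical,
    # not Spotify's actual algorithm): recommendations start generic and
    # sharpen as user feedback accumulates over time.
    from collections import Counter

    class ToyRecommender:
        def __init__(self, catalog):
            self.catalog = catalog    # genres we could recommend
            self.listens = Counter()  # accumulated user feedback

        def record_listen(self, genre):
            # Active user involvement: every play event becomes training data.
            self.listens[genre] += 1

        def recommend(self):
            if not self.listens:
                # Cold start: no feedback yet, so fall back to a generic default.
                return self.catalog[0]
            # With more data, the top genre reflects the user more accurately.
            return self.listens.most_common(1)[0][0]

    recommender = ToyRecommender(["pop", "jazz", "ambient"])
    print(recommender.recommend())  # generic cold-start guess: "pop"
    for genre in ["jazz", "jazz", "ambient", "jazz"]:
        recommender.record_listen(genre)
    print(recommender.recommend())  # now personalized: "jazz"

Even in this toy version, the design questions from our interviews surface: the cold-start fallback has to be communicated transparently, and the user's everyday actions double as training input.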

3. AI is not a magical solution to all problems, but just another tool or infrastructure: a means to an end.

4. Designers don’t need special education or deep AI knowledge to work in the field.

One participant compared it to the digital and, later, the mobile revolutions of the past decades. Each required designers to familiarize themselves with the technology and to develop a new design language for the emerging products; something similar is happening with AI at the moment.

5. Instead of special education, designers need to work in multidisciplinary teams and recognize and make use of others' expertise.

6. We should use AI technology to its fullest potential while always keeping worst-case scenarios in mind.

7. Defining values and moral responsibility

However, most agreed that thinking about the underlying values of an AI product is important and should be fostered and encouraged, especially since governmental regulation, although encouraged and appreciated by all of them, was met with limited trust in its ability to produce meaningful and evolving guidelines on how to design AI systems.

Summary & next steps

Beyond that, the role of HCD remains the same: to work together with engineers and management to create a product, to make it fit users' needs, and to advocate for users' rights. In this tech-driven space, HCD and its practitioners might be needed more than ever.

Although our insights are derived directly from interviews with practitioners working in the industry, we are aware of the limitations of our study. A small-scale, qualitative study like ours can only cover a limited number of perspectives and insights in a novel field with diverse opinions and approaches. We therefore suggest a follow-up study: a wide-reaching survey targeted at HCD practitioners working on AI products in industry, to diversify the participant pool and extend our insights on the questions set out above from a quantitative perspective.

Research and article by Frederik Ueberschär and Willem van der Maden at the Delft University of Technology, with support from Derek Lomas and the Vibe Research Labs
