The Role of Human-centred Design in AI projects

Frederik Ueberschär
Mar 22, 2021


Introduction

Artificial intelligence (AI) and machine learning (ML) seem to be everywhere, with new stories and beneficial use cases popping up every day. Advanced algorithms work in the background to keep us safe from banking fraud and email spam, or to recommend personalized content for our entertainment. Prominent examples of services built on such systems are Netflix, Spotify, Amazon, and other web services, Medium included.

The list of current and future beneficial use cases of AI just keeps on growing. In September, Wired published an article about an AI upgrade to the US National Guard’s fire-mapping drones that promises to shorten the critical time between the detection and the containment of wildfires. Only months before, the New York Times reported on the use of deepfake technology in a new HBO documentary to protect the identities of the persecuted homosexual interviewees while still preserving their humanity for storytelling purposes. With both of these arguably beneficial examples, it isn’t hard to imagine how the same technology could be put to more sinister uses.

AI, or some form of advanced machine learning, can be found in basically every smartphone and online service nowadays, providing a touchpoint with a direct and noticeable impact on users. With the choice to use this technology in everyday consumer products comes a new wave of open design decisions: how should these automated systems behave, how much control should users have over them, and what shouldn’t they be allowed to do? To address these issues, human-centered design methodologies might be able to translate human values into design restrictions, solutions, or principles. We therefore aimed to understand the role of human-centered designers in the AI development process.

What we set out to find:

  1. What is the role of Human-Centred Design in the design of AI systems, now and in the future?
  2. How are the success metrics of an AI system defined from a designer’s perspective?
  3. To what extent is human well-being considered when designing an AI system?
  4. To what extent do human-centered designers feel morally responsible for the implications of the designed AI system?
  5. How are the underlying values of the AI system defined and embodied?

The approach

We conducted a series of interviews with human-centered designers and UX practitioners who work on products with an active AI component. The interviewees’ backgrounds ranged from small startups to Fortune 500 companies. We transcribed the interviews and analyzed them for similarities and divergences with respect to our questions. Some of our preliminary insights are described in the results section below.

Results

1. Designers require a seat at the table to represent users and to advocate and fight for their rights

Over the past decades, designers have already managed to get a seat at the table in many institutions. According to our participants, they need to be involved in the development of AI products as well, to make sure that the user problem is well understood and that the AI product actually meets the users’ needs.

Furthermore, designers should advocate and push for the values of transparency and user agency. As human-centered practitioners, they are the voice of the user at the table and should make sure that users’ needs and concerns are met. Of course, designers shouldn’t be the only ones caring about these issues; managers and engineers should care just as much. But since designers focus on the user and their wellbeing, it is only natural that the fight starts with them.

2. Designing an AI product is somewhat different, but not that different, from designing a normal product.

According to our participants, the design of an AI product is quite similar to that of a ‘normal’ product. Most of a designer’s toolkit still applies, as the goal and success metrics are still about satisfying users and solving their problems.

In that case, AI becomes a potential tool for advanced automation and removing points of friction for the user. For designers, this means (like with every tool) that they need to be aware of an AI’s capabilities and limitations.

The key difference between an AI and a non-AI product is its evolutionary aspect. An AI product’s capabilities might be limited at the beginning and grow as new data comes in over time. Spotify’s Discover Weekly algorithm, for instance, produces sometimes eerily fitting recommendations by analyzing your listening habits over time. In other instances, training the AI requires active user involvement, and the design needs to incorporate those functionalities. Results might not be perfect in the beginning, but they improve drastically over time. Communicating this to the user in a transparent manner is crucial, and in this communication it is often more about why something works in a specific way than about how it works in technical terms.
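To make this evolutionary aspect concrete, here is a minimal, purely illustrative sketch of a recommendation profile that starts out empty and sharpens with every play. The catalogue, genre features, and names are invented for illustration; this is not Spotify’s actual algorithm, just a toy example of a product learning from incoming usage data.

```python
import numpy as np

# Hypothetical genre features (electronic, jazz, classical) for a tiny catalogue.
CATALOGUE = {
    "track_a": np.array([0.9, 0.1, 0.0]),
    "track_b": np.array([0.1, 0.8, 0.1]),
    "track_c": np.array([0.0, 0.2, 0.8]),
}

class TasteProfile:
    """Running average of the features of everything a user has played."""

    def __init__(self, n_features: int):
        self.profile = np.zeros(n_features)
        self.plays = 0

    def update(self, track_id: str) -> None:
        # Each play nudges the profile toward that track's features, so
        # recommendations start out weak and sharpen as data accumulates.
        self.plays += 1
        self.profile += (CATALOGUE[track_id] - self.profile) / self.plays

    def recommend(self) -> str:
        # Suggest the catalogue track most similar to the current profile.
        def score(features: np.ndarray) -> float:
            norm = np.linalg.norm(self.profile) * np.linalg.norm(features)
            return float(self.profile @ features) / norm if norm else 0.0
        return max(CATALOGUE, key=lambda t: score(CATALOGUE[t]))

user = TasteProfile(n_features=3)
for track in ["track_a", "track_a", "track_b"]:
    user.update(track)
print(user.recommend())  # leans electronic after this listening history
```

Early on, the profile is near zero and the suggestions are essentially arbitrary; after a handful of plays it already reflects the user’s taste. That gradual improvement is exactly what a designer needs to anticipate and communicate.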

3. AI is not a magical solution to all problems, but just another tool or infrastructure: a means to an end.

Contrary to how it is often portrayed in pop culture, AI is not some kind of magical solution, but rather an underlying, complex infrastructure that can deal with huge amounts of data. It is often hidden from direct user interaction, and sometimes its workings are even beyond the understanding of its creators. As with every tool, designers need to be aware of its capabilities, limitations, and pitfalls in order to design with and around them. In the end, the goal of most products is still user satisfaction and (hopefully) user wellbeing.

4. Designers don’t need special education or deep AI knowledge to work in the field.

Good news for every designer who wants to get started in the field of AI products: our participants agreed that no special education is needed. However, foundational knowledge about the possibilities, limitations, and potential pitfalls of AI is certainly helpful. What is truly needed from designers is the ability to work together with engineers toward a meaningful outcome, as discussed in the next point.

One participant compared it to the digital and then the mobile revolution that happened some decades ago. It required designers to familiarize themselves with the technology and develop a new design language for the emerging products; something similar is happening with AI at the moment.

5. Instead of special education, designers need to work in a multidisciplinary team and recognize and make use of others’ expertise.

Just as designers are the experts on user needs and on how to design for them (and continue to be so with AI products), engineers are the experts on the underlying technology. Recognizing that expertise and listening to what others have to say, together with the willingness and ability to ask many questions, will help newcomers get started in the field of AI. This can (and should) be supported by a company’s onboarding process and by a culture that allows for and encourages questions; sadly, that does not seem to be the case everywhere.

6. We should use AI technology to its fullest potential, while always thinking of the worst-case scenarios.

AI can help us approach technical and societal issues and improve humanity and our environment. However, with AI products more than ever, it is important to ask: what is the worst that could happen? Due to its capabilities, mistakes in the design of an AI product can have vast consequences, and, as one of our participants phrased it, “it does not matter if it happened only once: your design allowed for that. It’s not a bug; it’s a feature until you fix it.”

7. Defining values and moral responsibility

The notion of underlying values is often discussed in AI literature. When we asked our participants how this is done in practice, we got a variety of responses. Some think very deeply about the underlying values of their products, such as transparency, the ability to explain decisions, and a user’s agency, while others derive them indirectly from their team culture, where the values you set for yourself hopefully also influence the product in the end. Both stand in contrast to some startup mentalities, where the “go go go” spirit often does not leave the time or headspace to actually think about a product’s values.

However, most agreed that thinking about the underlying values of an AI product is important and should be fostered and encouraged. This is especially true since governmental regulation, although encouraged and appreciated by all of them, was met with limited trust that it would actually produce meaningful and evolving guidelines on how to design AI systems.

Summary & next steps

In our small-scale research study, we set out to shed more light on the role of human-centered design in contemporary AI products. Through conversations with five HCD practitioners working in the industry, we found that the role of HCD in AI products is still very similar to that in non-AI products. The main differences are an increased need to design for transparency, consideration for the evolutionary nature of an AI product, and awareness of the technology’s capabilities, limitations, and pitfalls.

Beyond that, the role of HCD remains the same: to work together with engineers and management to create a product, make it fit users’ needs, and advocate for users’ rights. In this tech-driven space, HCD and its practitioners might be needed more than ever.

Although our insights are derived directly from interviews with practitioners working in the industry, we are aware of the limitations of our study. A small-scale, qualitative study like ours can only cover a limited number of perspectives in a novel field with diverse opinions and approaches. We therefore suggest a follow-up study in the form of a broad survey targeted at HCD practitioners working on AI products in the industry, to diversify the pool of participants and extend our insights on the questions above from a quantitative perspective.

Research and article by Frederik Ueberschär and Willem van der Maden at the Delft University of Technology, with support from Derek Lomas and the Vibe Research Labs
