Why designing AI for humans requires “productive discomfort”

When the consumer version of Google Glass hit the scene in 2014, it was heralded as the start of a new era of human-computer interfaces.

People could go about their day having access to all the information they need, right in front of their eyes.

Eight years later, how many people do you see walking around with connected glasses?

The lesson here, as described by Stanford professor Elizabeth Gerber, is that “technology can only reach people if they want it.”

Speaking at Stanford’s recent Human-Centered Artificial Intelligence (HAI) Fall Conference, she noted that “we didn’t want to wear Google Glass because it invaded our privacy. We didn’t want to because it changed human interaction. Just remember Google Glass when you’re thinking about what AI can do – people have to want it.” (For a full overview of the entire conference, see Shana Lynch’s article on the HAI site.)

“Designing AI that people want is as important as making sure it works,” Gerber continued. Another lesson comes from the adoption of AI-powered tutors over Zoom during COVID-induced school closures – tutors that ended up distracting children from the subjects they were meant to teach. The same caution applies to workers who have to work with AI-driven systems, she added.

Also: The problem with AI: it’s not you, it’s the data

Designing human-centric AI involves more interaction with people across the business, and it is often hard work to get everyone on board and on the same page about what makes a system useful to workers and to the business. Having the right people in the room does not guarantee consensus; in fact, results often come from disagreement and discomfort. “We have to manage and look towards productive discomfort,” said Genevieve Bell, professor at the Australian National University and a speaker at the HAI event. “How do you teach people to be good in a place where they feel uncomfortable?”

It may even mean that no AI is better than some AI. “Sometimes no AI is the best AI,” Gerber pointed out. “Remember that when you’re designing, taking this human-centric approach and designing for people’s work, sometimes you just need a script. Instead of taking an AI-centric approach, take a human-centric approach. Design and test iteratively with people to increase their job satisfaction and engagement.”

Perhaps counterintuitively, when designing AI it may be best to avoid making the AI more human-like, such as using natural language processing for conversational interfaces. In the process, system functionality that helps make people more productive can be diluted or lost entirely. “Look at what happens when someone who doesn’t understand design builds the prompt system,” said Ben Shneiderman, a professor at the University of Maryland. “Why is it a conversational thing? Why is it a natural language interface, when it’s a great place to design a structured prompt that would have the various components, designed according to the semantics of prompting?”
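
To make the contrast concrete, here is a minimal sketch of what a structured prompt could look like, as opposed to a single free-form chat box. The field names and template are illustrative assumptions, not a design Shneiderman presented:

```python
from dataclasses import dataclass, field

@dataclass
class StructuredPrompt:
    """A prompt assembled from named components rather than free-form chat.

    The components below are hypothetical; the point is that each one is
    a distinct, inspectable field instead of an undifferentiated sentence.
    """
    role: str                                             # who the model should act as
    task: str                                             # what the user actually wants done
    constraints: list[str] = field(default_factory=list)  # hard requirements
    output_format: str = "plain text"                     # expected shape of the answer

    def render(self) -> str:
        # Assemble the components into one explicit, reviewable prompt string.
        lines = [f"Role: {self.role}", f"Task: {self.task}"]
        if self.constraints:
            lines.append("Constraints:")
            lines.extend(f"- {c}" for c in self.constraints)
        lines.append(f"Output format: {self.output_format}")
        return "\n".join(lines)

prompt = StructuredPrompt(
    role="technical editor",
    task="Summarize the attached meeting notes",
    constraints=["under 200 words", "no jargon"],
    output_format="bulleted list",
)
print(prompt.render())
```

Because each component is an explicit, inspectable field, the interface can validate and document every part of the request, which a single text box cannot.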

Also: The real purpose of AI may no longer be intelligence

The thinking that human-computer interaction should be based on human-human interaction “is suboptimal; it’s bad design,” Shneiderman continued. “Human-human interaction is not the best model. We have better ways of designing, and moving away from natural language interaction is a no-brainer. There are many ways to move past this model and reframe the idea to design tools: super tools, telebots, and active appliances.”

“We don’t know how to design AI systems to have a positive impact on humans,” said James Landay, vice director of Stanford HAI and host of the conference. “There is a better way to design AI.”

The following recommendations emerged from the conference:

  • Reframing and redefining human-centered design: Panelists offered a new definition of human-centric AI, one that emphasizes the need for systems that improve human lives and challenges the problematic incentives currently driving the creation of AI tools. Current efforts are based on the “denial of human expertise,” Shneiderman said. “Yes, humans make mistakes, but they are also remarkable for their creativity and their capacity for expertise. What we really need to do is build machines that make smart people smarter. We want to improve their abilities. We do that in a lot of designs by having boundaries, guardrails, interlocks – these are all things that have gone into the human factors literature for 70 years, about how we avoid breakdowns. Your self-cleaning oven: once the temperature is over 600 degrees Fahrenheit, you can’t open the door, okay? And that’s built into a lot of technology. That’s design at work. It’s the right kind of design we need to build more of, and we need to improve human expertise while reducing the chance of error.” (A toy version of this interlock appears in the sketch after this list.)
  • Look for multiple viewpoints: Human-centric AI requires multidisciplinary teams made up of workers, managers, software designers, and others with conflicting perspectives, said Carnegie Mellon University professor Jodi Forlizzi. Additionally, according to Saleema Amershi, senior director of research at Microsoft Research, “We have to reframe some of our processes. Even though there are people like designers who understand human-centered principles, many of these people are not in the room when the decisions are made about what will be built. We need to rethink our whole process and have these people work with the technologists and the AI people from the very beginning.”
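
Shneiderman’s oven example translates naturally into code. The sketch below is a toy illustration of that interlock pattern; the class and threshold simply mirror his example and are not drawn from any real appliance:

```python
class SelfCleaningOven:
    """Toy model of the interlock Shneiderman describes: above a
    temperature threshold, the door simply cannot be opened."""

    DOOR_LOCK_THRESHOLD_F = 600  # threshold taken from Shneiderman's example

    def __init__(self) -> None:
        self.temperature_f = 75
        self.door_open = False

    def open_door(self) -> None:
        # The interlock: the unsafe action is made impossible,
        # not merely warned against.
        if self.temperature_f > self.DOOR_LOCK_THRESHOLD_F:
            raise PermissionError(
                f"Door locked: oven is at {self.temperature_f}°F "
                f"(limit {self.DOOR_LOCK_THRESHOLD_F}°F)"
            )
        self.door_open = True

oven = SelfCleaningOven()
oven.temperature_f = 850   # self-cleaning cycle is running
try:
    oven.open_door()
except PermissionError as err:
    print(err)  # the guardrail, not the user's vigilance, prevents the error
```

The same pattern, making the unsafe action impossible rather than merely warning against it, is what guardrails and interlocks mean for AI systems.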

Also: Artificial intelligence: 5 innovative applications that could change everything

  • Rethinking AI success metrics: “We most often ask what these models can do, but we really have to ask ourselves what people can do with these models,” Amershi said. “We are currently measuring AI by optimizing for accuracy, but accuracy is not the only measure of value. Designing for human-centric AI requires human-centric metrics.” (A toy comparison of model-centric and human-centric metrics appears after this list.)
  • Keep humans in the loop – and AI easy to override: “We want AI models that are understandable, predictable, and controllable,” Shneiderman said. “It’s the enduring notion that you’re in charge and you can ignore it. We come to depend on things that are reliable, safe, and trustworthy, such as our cameras adjusting shutter speed, focus, and color balance. But if we see the focus is bad, we can adjust it. The mental model should be that users have a control panel through which they can get what they want; the system gives them insights and offers opportunities, but they can override it.” (A sketch of this override pattern also appears after this list.)
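
One way to read Amershi’s point concretely is to score a system on model accuracy alongside measures of what people actually accomplished with it. The session records and metric names below are hypothetical placeholders, not data from the conference:

```python
# Hypothetical evaluation records: each entry is one user session.
sessions = [
    {"model_correct": True,  "task_completed": True,  "minutes": 4,  "overrides": 0},
    {"model_correct": True,  "task_completed": False, "minutes": 11, "overrides": 3},
    {"model_correct": False, "task_completed": True,  "minutes": 6,  "overrides": 1},
]

n = len(sessions)
accuracy = sum(s["model_correct"] for s in sessions) / n          # model-centric
completion_rate = sum(s["task_completed"] for s in sessions) / n  # human-centric
avg_minutes = sum(s["minutes"] for s in sessions) / n             # human-centric
avg_overrides = sum(s["overrides"] for s in sessions) / n         # human-centric

print(f"Model accuracy:       {accuracy:.0%}")
print(f"Task completion rate: {completion_rate:.0%}")
print(f"Avg. time on task:    {avg_minutes:.1f} min")
print(f"Avg. overrides:       {avg_overrides:.1f}")
```

A model can score well on the first line and still fail people on the other three.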
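
Shneiderman’s camera analogy also suggests a simple implementation pattern: the system proposes a setting, but the user’s control panel always wins. The sketch below is a hypothetical illustration of that override pattern, not code from any speaker:

```python
from typing import Optional

class CameraControlPanel:
    """Hypothetical sketch of Shneiderman's pattern: the system proposes,
    the user can always override."""

    def __init__(self) -> None:
        self.manual_focus: Optional[float] = None  # None = accept the auto value

    def auto_focus(self) -> float:
        # Stand-in for the system's automatic suggestion.
        return 2.8

    def effective_focus(self) -> float:
        # The user's explicit setting always takes precedence.
        return self.manual_focus if self.manual_focus is not None else self.auto_focus()

panel = CameraControlPanel()
print(panel.effective_focus())  # 2.8 -> the automatic suggestion
panel.manual_focus = 1.4        # the user sees the focus is wrong and overrides
print(panel.effective_focus())  # 1.4 -> the user stays in charge
```

The design choice is that the automatic value is a default, never a decision; the user’s explicit setting always takes precedence.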
