Designing for Trust in an AI-First World
At one of our recent studio Vision Days© at O Street we talked about how each of us yearned to keep up with the ebbs and flows of our industry, partly by attending events and training sessions. We divvied it up between us to cover a spectrum of design knowledge.
Now at this stage I should have opted for colour theory or typography, but no, I chirped up and said UX design, setting myself up for a world of hurt. I’m joking—sort of—but really am interested in how this particular design specialism has started to become more and more important in our digital world.
That UX? It’s tough stuff! I attended my first event on the subject at London’s Google HQ this week (if you’re gonna get info, best go for the top, eh!?). The Google UX team hosted a small session to kick off London Design Week on the theme of ‘Trust in an AI-first world’.
Trust and computers have been an issue for a while. Years ago it was trust in an e-commerce world: would users put their credit card details into a website? Even without clever AI systems, big brands had the ability to make quite intrusive forays into our personal lives—remember the story from 2012 about US retail brand Target finding out about a young teenager’s pregnancy before her own father did?
Now that the true value of the data that can be gleaned from our digital footprint is being realised, things have gone up a level. We have: more immersive computer experiences; more powerful AI systems that have the ability to track our every move; and brands savvy enough to convert that insight into hard profits. Should we trust them?
To help answer that question, here are our experts:
The event at Google was hosted by Matt Jones (Design Director at Google Research & Machine Intelligence). The expert panel included: Sarah Gold (Founder and Design Director of If); Priya Prakash (founder of Design for Social Change); Tom Taylor (Chief Product Engineer at Co-Op Digital); and Rachel Coldicutt (CEO at DotEveryone).
And here’s what they had to say (forgive my paraphrasing):
AI machine learning sounds like ‘magic sprinkles’ but behind that people have to work and build good things. We need to own up to this and start pulling on the threads before they start pulling on us. Neural networks are already in our smartphones—what opportunity does this offer? Can we gather and store our data locally? It’s a huge shift in thinking… Google are trying to change the underlying logic, moving the processing and acting closer to the person actually doing the thing.
—Matt Jones
As designers we need to show a different way that things can be. Making and testing is quite a powerful way to do this: to learn, then share that learning openly. But how can we safely test and learn? The tricky bit is what data you use, and the ethics of using raw, real data. Can you use dummy data? Small groups?
—Sarah Gold
Amazon Echo doesn’t understand my 4yo most of the time, but when he asked to ‘play Everything is Awesome’ it worked, because the machine had learned from so many other toddlers. But could Amazon be even more responsible with that power? I sometimes wish Echo only did things if you say ‘please’ first; it has the ability to teach my children manners, how to behave well.
—Rachel Coldicutt
People are totally ambivalent about AI. They are not interested when it works, but when it goes wrong they want to know who to speak to. People are only going to get busier, so we pretty much know that they are not going to track every bit of data about themselves that is recorded. However, we do have a responsibility to make the important and sensitive information easy to access. There will have to be a level of openness from the AI industry to the wider public.
—Priya Prakash
The password-strength green bar is an amazing visual tool for helping build trust. But it is just one micro-interaction, the tip of the iceberg when it comes to public understanding of an AI trust infrastructure.
—Tom Taylor
Conclusion
By the very nature of the topic, much of this discussion brought about more questions than answers. It’s realising this is an important topic—and talking about it—that is key. My takeaways:
– Our behavioural data has been valuable to brands for years, and now that AI offers the ability to process it in larger and more complex ways, it’s even more valuable.
– Not only do most people not understand this subject, we’re also not talking about this as a society—we should be!
– Subjects like history and philosophy (how to behave as a decent human being in an ever-changing world) will be more important than coding or IT studies in equipping our children to cope with the new digital age.
– The Google UX team is starting a great conversation here.
– Manners maketh humans…
What I do know is that, as a leading graphic design studio, we can’t ignore the amazing pool of knowledge UX designers are gathering at the coal face of technological advances.
It’s not only important for us, it’s important for society to understand these things, especially when it comes to trusting digital data collection. There are certainly brands collecting our data whom we should trust, but there are also plenty to be wary of. A good graphic designer who understands this stuff is exactly the kind of person who can cut all the complicated shit out of the discussion and communicate what really matters to real, busy, everyday folk.
So if any of you want to talk to me, give me a call. I’m sure your bots can find my number.
– David Freer