The Success of Conversational Interfaces Relies on Designers
Here’s a fun fact:
Today, one in five Google search queries on Android devices is made by voice.
If you ask us, that number is only going to rise.
We can speak about 140 words per minute, but in the same amount of time we can type only about 40. That alone is probably the clearest reason why future conversational interfaces could be voice-based only, not relying on typing and reading as we do today.
But if something doesn’t have a face, if we can’t look at it as a concept or a product—how do we then talk about it?
In order to talk about something, we also need to create something to look at. Give it a name, give it a face.
Or do we?
Conversational interfaces are well on their way to changing our perception of what a user interface is.
We are used to controlling our devices by pushing buttons. But in the very near future, interactions with devices and services will most likely be driven by our shining voices, not our clumsy fingers.
But if artificial intelligence and conversational interfaces are to be successful, we need design to solve a great challenge:
What does artificial intelligence look like and how does it behave in different situations? And probably even more importantly: Does it even have to be visualized?
It’s time for designers to come into play.
Today, if you google ‘artificial intelligence’ you’ll be met by this:
Futuristic visualizations of what we once imagined AI could look like. A bit old-fashioned, right? They don’t really tell us anything about where we are heading today, or tomorrow.
If we dive a bit deeper into the pond of artificial intelligence and look at conversational interfaces, the first visual example we stumble upon is often a visualization of services running on Facebook Messenger:
Simple screenshots from Messenger are strong storytellers because more than a billion people use the service and recognize the universe. The same goes for Siri, Apple’s virtual assistant:
We could come up with a few more examples, but you get the idea: we are not used to visualizing artificial intelligence because it is still rather new. And this is where it gets interesting, because what should it even look like?
Should it look like anything at all, or just remain an individual conception in each of our minds?
For designers, the opportunity to give a face and a voice to artificial intelligence is probably the toughest and most interesting design challenge of the last 50 years.
If conversational interfaces come to be dominated by voice, all physical interaction could become obsolete, replaced by nothing but a voice and the machine’s ‘ear’ to perform actions.
In 2015, Amazon launched Echo and got all of us to befriend their new voice-based assistant Alexa. Amazon Echo is capable of voice interaction, music playback, making to-do lists and much more.
Recently, Google launched Home, a voice-based unit clearly aimed at taking on Amazon Echo.
Aside from the obvious design of the physical product, the technology behind both Amazon Echo and Google Home has no real visual interface.
There’s not much to look at, nothing to poke around inside of, nothing to scroll through, and no clear boundaries on what it can do.
That is why we need to design around conversational interfaces, and consider all aspects in which design can play a role.
What if, ultimately, visual identities as we know them are replaced by audible identities? We don’t know the answer yet. But we know how we are going to get it.