The future of UI

What is the future of the graphical user interface (GUI) when every action we want can be voice activated?

Apple has implemented voice control for calling and listening to music. Google has just announced voice control for a whole raft of actions, including:

  • search
  • text (no more thumb typing)
  • email
  • call
  • navigate to
  • listen to (especially cool because it can automatically start a Pandora channel or open Last.fm)

And they are promising many more to come.

Voice recognition isn't new, but we know that sometimes all it takes is an elegant execution of an old idea to make the next new thing. This is something that Apple is amazing at. But Google's mobile search is now being initiated by voice commands 25% of the time. That is a huge shift in the way people interact with search, one of the most popular computer actions.

Google's new voice commands work by recording your command and streaming it to the cloud, where Google's servers parse it, figure out what you want, and return a command that makes the phone complete the action. This is notable because it essentially turns your mobile into a supercomputer by offloading the hardest processing to the cloud.
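To make the round trip concrete, here is a toy sketch of the server-side step: speech arrives as text (the hard recognition work already done in the cloud) and is mapped to a structured action the phone can execute. The names `parse_command` and `Action` are illustrative assumptions, not Google's actual API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    verb: str       # e.g. "call", "navigate", "listen"
    argument: str   # e.g. a contact name, an address, an artist

def parse_command(transcript: str) -> Action:
    """Stand-in for the cloud-side parser: map an utterance to an action.

    Matches the longest known command phrase at the start of the
    transcript; anything unrecognized falls back to a plain search,
    mirroring how voice search behaves today.
    """
    verbs = {
        "call": "call",
        "send text to": "text",
        "navigate to": "navigate",
        "listen to": "listen",
    }
    lowered = transcript.lower()
    for phrase in sorted(verbs, key=len, reverse=True):
        if lowered.startswith(phrase + " "):
            return Action(verbs[phrase], transcript[len(phrase) + 1:])
    return Action("search", transcript)  # default: treat it as a query
```

The phone's only remaining job is to receive the `Action` and hand it to the dialer, the maps app, or the music player, which is exactly why so little processing needs to happen on the device itself.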

All this science fiction makes me wonder: if (or when) voice commands become the standard way to interact with a computer, where does that leave the GUI? If we follow this trajectory to its logical conclusion, we will no longer need the GUI for many of the most common actions on a mobile device. All we will need the visual interface for is confirmation that the voice command is being performed properly.

If Google does this properly and lets apps and mobile Web sites hook into its speech command API, any developer could design an app to be voice activated.
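What might such a hook look like from a developer's side? A purely hypothetical sketch: apps register trigger phrases with a platform-level dispatcher, which routes recognized speech to the right handler. None of these names (`voice_command`, `dispatch`) come from an actual Google API.

```python
# Registry mapping spoken trigger phrases to app-provided handlers.
handlers = {}

def voice_command(phrase):
    """Decorator an app would use to claim a spoken trigger phrase."""
    def register(func):
        handlers[phrase.lower()] = func
        return func
    return register

@voice_command("check in at")
def check_in(place):
    # A third-party app's handler; the platform never needs to know
    # what "checking in" means, only which phrase routes here.
    return f"Checked in at {place}"

def dispatch(transcript):
    """Route a recognized utterance to the longest matching trigger."""
    lowered = transcript.lower()
    for phrase in sorted(handlers, key=len, reverse=True):
        if lowered.startswith(phrase):
            return handlers[phrase](transcript[len(phrase):].strip())
    return None  # no app claimed this phrase
```

The appeal of a design like this is that the platform owns the hard part (recognition in the cloud) while apps only declare the phrases they care about, much as they already declare the URLs or file types they handle.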

Will Google or Apple pull it off? Is the time right for voice activation? When forced to use a keyboard, will we soon say, "How quaint"?