Google Assistant was supposed to be Google’s future. The voice assistant was an early attempt at ambient computing, one that gave customers a one-on-one relationship with all of the company’s products. Sure, Google was directly responding to Amazon’s announcement of Alexa and the Echo, but the limitation is the same for both: these assistants “can understand a finite list of questions and requests,” and anything that falls outside of that list won’t be understood. That more constrained call and response is why even conversations with multiple back-and-forths can feel wooden and restrictive. Ultimately, these voice assistants are looking for the right output for your input and nothing else, regardless of how many services are connected or how many “skills” (Alexa plugins) or “actions” (Google parlance for the same on Assistant) they gain.
Generative artificial intelligence, powered primarily by large language models, aims to be far more flexible. A generative AI like Google’s Gemini will respond to just about anything you ask it in something akin to human speech, and it can be interrupted and redirected with new questions and inputs without having to start from scratch. In other words, it’s contextual in the way Assistant tried to be but never could become. At first blush, Gemini seems like a much better option as a default AI assistant, or as a voice assistant when paired with a smart speaker, which is likely why Amazon is already planning to use generative AI in its Echo devices.
Swapping One Unreliable Assistant for Another
So Google Assistant might have stalled out because it could only go so far toward being the helpful, personal version of Google that its namesake wanted it to be. Google Gemini, meanwhile, is already available on Android devices (and via the Google app on iOS), and this year’s I/O keynote made it clear that Android 15 and the Pixel 9 phones coming out in the fall will leverage Gemini wherever possible for a more natural, “intelligent,” and personal experience.
The main problem is that no one is really sure whether Gemini is any more reliable than Google Assistant ever was. Gemini certainly sounds more intelligent and seems more capable, but can you trust it? When Google Assistant refused to do something, you could at least be confident that something had gone wrong and the voice assistant simply couldn’t handle the request. Gemini’s mistakes aren’t even always recognizable as mistakes. It’ll confidently lie if you let it, and catching its missteps requires double-checking everything it tells you. Is a limited assistant worse than one that can actively mislead you? I’d argue that, no, it’s not, but if there’s a theme for 2024, it’s that tech’s biggest companies feel differently.