TL;DR: Google is trying to position its Google Glass headset as a consumer device with the cool factor of an iPhone. But its initial users are likely to be businesses, and they will need to be convinced of the value it delivers, not of how it looks.
We fling about the word “revolutionary” with wild abandon these days. The primary hardware innovation of the Apple iPhone, for example, was really just an evolutionary step. Replacing a keypad with a touchscreen meant that, instead of holding your phone in one hand and watching its screen as you tap the keys with the other, you could now hold your phone in one hand and watch its screen as you tap the screen with the other. As we know, this seemingly subtle change proved to radically enhance the usability of the phone and set the benchmark for today’s smartphones — but they’re still smartphones.
Google Glass, on the other hand, is a genuinely revolutionary piece of kit. As the first real consumer-grade attempt at an augmented reality computer, it completely dispenses with the screen, the keypad and even the entire “holdable” device itself. This means throwing out every user interface paradigm developed since the 1970s, when computers started to look like today’s computers, and building something entirely new to replace them. Gulp.
Yet Google appears to be petrified of something different: that the device will be perceived as “dorky”. As you can see from the picture to the right, I can personally attest that this fear is not entirely misguided: real-life wearable computers (and their wearers) do tend to fall more on the side of “geeky” than “cyberpunky”. Google’s marketing to date has thus consisted nearly entirely of increasingly odd attempts to make it “cool”: stunt cyclists performing tricks on the roof of a convention centre, skydivers leaping out of airplanes and an entire fashion show with slinky models strutting their stuff.
But let’s step back in time. Imagine being offered the chance to clip an unwieldy, heavy plastic box to the waistband of your bell-bottomed pants, bolt two bright-orange foam sponges over your ears with a shiny metal hairband, and string these bits together with wire. Would you pay good money for this fashion disaster?
If it’s the 1970s, hell yeah: the Sony Walkman was a runaway hit. Never mind the clunky appearance; the mere fact that it let you, for the first time, listen to music anywhere was worth the sartorial price of admission. And without that ability, the minor miracles of miniaturization and ruggedization needed to turn the unwieldy tape decks of yore into the Walkman would have gone to waste.
But Google isn’t talking, at all, about what you can or, more importantly, could do with the Glass: their famous promotional video shows various existing Google apps doing precisely what they already do, only on a heads-up display. Sure, the user interface has changed radically, but the capabilities have not.
So will those existing apps on Glass be slick enough to make it a must-buy? Despite Google’s all-star developer team, their track record for customer-facing products is distinctly spotty, and the sheer challenge of designing an entirely new way to interact would perplex even Apple. The little we know of the hardware also indicates that some technologies considered key to heads-up interaction, notably eye tracking, are not going to be a part of the package. It’s thus exceedingly unlikely that the first iteration of Glass’s UI will nail it, and Google’s reluctance to reveal anything about the interface’s actual appearance and behavior strongly hints that they have their doubts as well.
Odds are, then, that Google Glass will be a dorky-looking product that offers an inferior interface for the kind of things you can do easily with a modern mobile phone, which has, after all, evolved for 20-plus years in the marketplace. This is not a recipe for success with consumers.
The solution? Sell the Glass on what it can do that nothing else can.
Five things you can do with Glass that you can’t with a mobile phone
1) Simultaneous interpretation. Hook up two Glasses so they can translate each user’s speech and beam it over to the other, where it is displayed as subtitles. Presto: you can now hold a natural conversation and track all the nonverbal communication that would be lost if you had to glance at your smartphone all the time.
(Not coincidentally, I wrote my master’s thesis on this back in 2001. My prototype was a miserable failure because computer miniaturization, speech recognition and my hardware hacking skills weren’t up to snuff, but I think Glass provides an excellent platform for producing something usable.)
2) Tactical awareness. A mobile phone app that shows the location of alerts and/or other security guards would be rather useless: what are you going to do, pull out your phone and start browsing your app directory when the robbers strike? The same application for an always-on Glass, on the other hand, is a natural fit.
(This, too, is by no means a new idea. MicroOptical’s heads-up display, the direct predecessor of the optics behind Google Glass, was the result of a DARPA grant for the US Army’s Land Warrior project. The pathetic fate of that project, which ran from 1994 before being cancelled in 2007 and kicked off again in 2008 without ever accomplishing anything of note, also hints at why Google is, probably wisely, steering far clear of the bureaucratic morass of military procurement.)
3) Virtual signage. Imagine an enormous warehouse filled with a variety of ever-changing goods, along the lines of an Amazon or UPS logistics center. Right now, to find a given package in there, you’d have to “look it up” on a PC or smartphone, get a result like “Aisle C, Section 17, Shelf 5” and match that to signage scattered all over the place. What if your Glass could just direct you there with visual and voice prompts, and show you the item number as well so you don’t have to print out and carry slips of paper? The difference sounds almost trivial, but suddenly you’ve freed up a hand and reduced the risk of getting run over by a forklift as you squint at your printout.
(Back in 2004, commercial wearable computing pioneers Xybernaut sold pretty much exactly this idea to UK grocery chain Tesco, but their machines were clunky battery hogs so it didn’t pan out too well. Xybernaut’s subsequent implosion after its founders were indicted for securities fraud and money laundering didn’t help.)
4) Surgery. Surgical operating theatres are filled with machines that regulate and monitor and display a thousand things on a hundred little screens, with tens of bleeps and bloops for various alerts and events. What if the surgeon could see all that information during a complex procedure, without ever having to take their eyes off their actual work?
(Once again, some products that do this already exist, but Glass has the potential to take this from an expensive, obscure niche to an everyday medical tool — once the FDA gets around to certifying it sometime around 2078, that is.)
5) Games set in reality. Mashing up reality and gaming is hard: countless companies have taken a crack at it over the past decade, and all foundered on the basic problem of having to use a tiny little mobile display as the only window into the game world. As Layar’s lack of success indicates, running around holding a phone in front of your face isn’t much fun, and relying on location alone to convey that there’s an invisible virtual treasure chest or tentacle monster in a physical alleyway stretches the imagination too much. But with an augmented reality display, this will suddenly change, and Valve is already making a big punt on it, although Michael Abrash rightly cautions against setting your expectations too high.
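To make the first of these ideas a little more concrete, here’s a minimal sketch of the interpretation loop from idea 1. Everything in it is a hypothetical stand-in: a real build would plug in actual speech recognition and machine translation services, and the toy phrase table below merely fakes the translation step.

```python
# Sketch of the simultaneous-interpretation loop (idea 1).
# The phrase table is a toy stand-in for a real translation service.

PHRASE_TABLE = {  # hypothetical English -> Spanish lookup
    "good morning": "buenos días",
    "how are you": "cómo estás",
}

def translate(text: str) -> str:
    """Translate a recognised utterance, falling back to the original text."""
    return PHRASE_TABLE.get(text.lower().strip(), text)

def subtitle(speaker: str, utterance: str) -> str:
    """Format the translated line for display on the *other* user's Glass."""
    return f"[{speaker}] {translate(utterance)}"

if __name__ == "__main__":
    # Two linked headsets: whatever A says appears, translated, on B's display.
    print(subtitle("A", "Good morning"))  # -> [A] buenos días
```

The interesting engineering is in what the sketch leaves out: low-latency streaming recognition and keeping the subtitles in sync with the speaker’s lips, which is exactly where a dedicated heads-up platform beats glancing at a phone.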
Notice one thing about the first four ideas? They’re all business applications, whose customers will willingly tolerate a clunky, somewhat beta interface as long as they can still get real dollars-and-cents value out of it. This is how both PCs and mobile phones got started, and once the nuts and bolts are worked out, the more mature versions can be rolled out to general consumers.
And once Glass (or something like it) reaches critical mass, we’ll suddenly have streets full of people with network-enabled, always-on video cameras, and a rather scary world of possibilities opens up. Add object recognition, and you can find litter, vandalism, free street parking spots. Add data mining, and you can spot the suddenly crowded new cafe or restaurant, or catch the latest fashion trend as it happens. Add face recognition, and you can find missing persons, criminals and crime suspects.
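As a toy illustration of that data-mining step, suppose each headset streams geotagged object detections to a shared service; the labels, locations and threshold below are all invented for the example. Aggregating reports from independent wearers is what separates a real signal (a free parking spot two people just saw) from one camera’s mistake:

```python
from collections import Counter

# Hypothetical detections streamed from many always-on headsets:
# (label, city block) pairs after object recognition has run on each frame.
detections = [
    ("free_parking", "5th & Main"),
    ("litter", "3rd & Oak"),
    ("free_parking", "5th & Main"),
    ("free_parking", "7th & Pine"),
]

def hotspots(stream, label, min_reports=2):
    """Blocks where enough independent wearers reported the same thing."""
    counts = Counter(loc for lbl, loc in stream if lbl == label)
    return [loc for loc, n in counts.items() if n >= min_reports]

print(hotspots(detections, "free_parking"))  # -> ['5th & Main']
```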
To Google’s credit, they are partnering with other developers almost from day one, and there will undoubtedly be even better ideas than these largely unoriginal off-the-cuff thoughts. We can only hope that one of them is spotted and executed well enough to become Glass’s killer app… but if Google keeps on being awfully coy about Glass’s capabilities, limiting access to dinky two-day hackathons and envisioning Google+ as the main use case, that day may still be some way away.