Wikitravel editors abandon Internet Brands, join up with Wikipedia

On July 11, 2012, the Wikimedia Foundation of Wikipedia fame made a decision that has been a long time coming: they decided to support hosting a new wiki devoted to travel, populated with Wikitravel content and, most importantly, the community that built Wikitravel.  It’s not a done deal yet, as the decision has to be confirmed by public discussion, but it’s looking pretty good so far; and if it comes true, this second shot at success is almost certain to result in the new gold standard for user-written travel guides, in the same way that Wikipedia redefined encyclopedias.

Let me start by making it clear that this is a personal blog post that does not claim to represent the view of all 72,000+ Wikitravellers out there, much less the Wikimedia Foundation.  I’ve played little role in and claim no credit for making this fork (legal cloning) happen, and my present employer Lonely Planet has nothing to do with any of this.  However, as a Wikitravel user and administrator since 2004, who has done business with Wikitravel’s current owner Internet Brands and seen first hand how they operate, I’ll take a shot at answering three questions I expect to be asked: why the fork is necessary, whether the fork will succeed, and how Internet Brands will react.

First, a quick history recap.  Founded in 2003 by Evan Prodromou and Michele Ann Jenkins as a project to create a free, complete, up-to-date and reliable world-wide travel guide, Wikitravel grew at an explosive pace in its initial years and seemed on track to do to printed travel guides what Wikipedia had done to encyclopedias.  But in 2006, with ever-increasing hosting and support demands and no money coming in, the Prodromous made the decision to sell the site to website conglomerate Internet Brands (IB), best known at the time for selling used cars at CarsDirect.com.

IB made many promises at the time to respect the community, keep developing the site and tread carefully while commercializing it.  The German and Italian wings of Wikitravel didn’t believe a word of it, so they rose up in revolt and started up Wikivoyage, the first fork of Wikitravel, which successfully supplanted the original for those two languages.  But the rest of us, including myself, opted to give IB a chance and see how things turned out.

Now to give Internet Brands credit where credit is due, it could have been considerably worse.  They’ve kept the lights on for the past 5 years, although overloaded or outright crashed database servers often made editing near-impossible.  They have respected the letter of the Creative Commons license, if not the spirit, as from day one they have refused to supply data dumps.   And they grudgingly abandoned some of their daftest ideas, like splitting each page into tiny chunks for search-engine optimization, after community outcry.  On a personal level, I also dealt with IB while running Wikitravel Press, and while they could be a tough negotiating partner, whatever they agreed on, they also delivered.

What they did not do, though, was develop the site in any way that did not translate directly into additional ad revenue.  The original promise to restrain themselves to “unobtrusive, targeted, well-identified ads” soon mutated into people eating spiders and monkey-punching Flash monstrosities, with plans to cram in a mid-page booking engine despite vociferous community opposition.  Once Evan & Michele were kicked off the payroll, bug reports sat unattended for years, and not a single new feature shipped, with the solitary exception of a CAPTCHA filter, a feeble attempt to stem the ever-increasing flood of spam.  Even the MediaWiki software running the site was, until very recently, stuck on version 1.11, five years and eight point releases behind Wikipedia.  Unsurprisingly, the once active community started to fade away, with all of Wikitravel’s statistics (Alexa rank, page views, new articles, edits) slowly flatlining.

By 2012, with various feeble ultimatums ignored by IB and no other way out in sight, the 40-odd admins of the site got together and decided to fork. After a short debate and a few feelers sent out in various directions, unanimous agreement was reached that jumping ship to the Wikimedia Foundation (WMF) was the way to go, with Wikivoyage also happy to join in.  Reaction on the Wikimedia side was almost as positive, and as I type this the birth of a new, truly free travel wiki appears to be only weeks away.  (Sign up here to be notified when it is!)

The natural question is thus, which of the two forks will win?  Internet Brands has triggered many a community revolt before, but the track record of those revolts is distinctly mixed.  QuattroWorld has found a stable user base but is still below AudiWorld in traffic rank; Cubits.org did not put a dent in Dave’s Garden; and the jury is still out on FlyerTalk vs MilePoint, but FlyerTalk retains a commanding lead.

Nevertheless, in Wikitravel’s case, I feel confident in predicting the answer: the new fork will win, by a mile.  Many of the reasons are clear — Wikitravel’s license allows copying all the content, nearly all editors and admins will jump ship, and the Foundation’s technical skills in running MediaWiki are second to none — but one takes some explaining.

The primary reason Wikitravel shows up so well in Google results is that it is linked from nearly every article about a place in Wikipedia.  Now, ordinary garden-variety links from Wikipedia to other sites are ignored completely by Google, because they have the magic anti-spam rel=nofollow attribute set.  However, Wikitravel is one of a very few sites that are linked through an obscure feature called “interwiki links”, which do not have that attribute set, and are thus counted in full by Google when it computes the importance of pages.  Thus, the moment those links are changed to point to the new fork — and all it will take is one edit of this page — the new site will be propelled to Google fame and Wikitravel.org will begin its inexorable descent to Internet obscurity.
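To make the mechanism concrete, here’s a toy sketch of how a crawler separates the two kinds of links: only anchors without rel=nofollow pass on ranking weight.  (The markup and the LinkWeigher class are my own illustration, not anything Google actually runs.)

```python
from html.parser import HTMLParser

class LinkWeigher(HTMLParser):
    """Collect outbound links, separating those a search engine counts
    (no rel="nofollow") from those it ignores."""
    def __init__(self):
        super().__init__()
        self.counted, self.ignored = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        if "nofollow" in (attrs.get("rel") or "").split():
            self.ignored.append(href)   # garden-variety external link
        else:
            self.counted.append(href)   # e.g. an interwiki link

# Hypothetical Wikipedia markup: a normal external link vs. an interwiki link.
html = (
    '<a rel="nofollow" href="http://example.com/">external</a>'
    '<a href="http://wikitravel.org/en/Paris">wikitravel:Paris</a>'
)
parser = LinkWeigher()
parser.feed(html)
print(parser.counted)  # ['http://wikitravel.org/en/Paris']
```

Flip the href in the second anchor to the fork’s domain and, as far as the crawler is concerned, all that ranking weight moves with it.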

The final question thus presents itself: How will Internet Brands react?  We have some clues already: as soon as they twigged to it, they simultaneously pleaded that everybody return to their grandmotherly embrace, tried to spin the fork as a “self-destructive” rogue admin coup against a Nixonesque “silent supermajority”, and attempted to censor discussion on Wikitravel itself.  When these attempts unsurprisingly fell flat, the phone lines started ringing, with head honcho Bob “Passion to Mission” Brisco calling up the WMF with promises of “innovative collaboration” if only they can keep their sticky fingers in the pie.

From Wikitravel’s point of view, it would obviously be best if Internet Brands cheerfully admitted defeat and handed over the domain and trademark to the WMF, which would avoid the necessity for a messy renaming. However, having followed the (private) discussion from the sidelines for a few days now, Internet Brands insists on keeping full control of the site and minting advertising money, and all they want from the WMF is a seal of approval, paid for with a slice of the loot.  The non-profit Foundation, on the other hand, aims simply to freely share knowledge and has a long-standing aversion to advertising, so all they are able to offer is an easy way out from what will otherwise be a PR disaster.  I’d still like to hope a deal can be done, but quite frankly, the gap between these two positions does not look bridgeable at the moment.

The other extreme is that Internet Brands tries to prevent or sabotage the fork via legal action, as they did in the vBulletin vs XenForo case that’s apparently still rumbling through the courts.  I think this is even more unlikely though: all they own is the Wikitravel trademark and domain, so as long as the new (and presently undecided) name is sufficiently dissimilar, they will not have a legal leg to stand on.  Unlike the XenForo case, there are no employees jumping ship, the software is open source, and the content itself is Creative Commons licensed and can be copied at will.

The most likely option is thus status quo: IB will keep doing the only thing it can, squeezing every last drop of revenue from visitors venturing in, and probably turning up the infomercial volume to 11.  But with the community soon to turn into a ghost town, and increasing numbers of spammers and vandals dropping in to trash the place with nobody left to clean up after them, they will probably have to disable editing sooner or later, and Wikitravel.org the site will die a slow, ignominious death.

It remains to be seen if the new travel guide can succeed among a broader public: travel information online and collaborative writing have both moved on since 2003, and there are still unresolved problems with asking users to write and agree on fundamentally subjective content.  But the new Wikitravel will remain the world’s largest open travel information site for the foreseeable future, and will certainly give the closed competition a run for their money.  Wikitravel is dead, long live Wikitravel!

To register your support or opposition to the fork proposal, please head to the Request for Comment on the Wikimedia Meta site.  Translations of the RFC into other languages are particularly welcome.  

The RFC is expected to run until the end of August, with a formal decision and the launch of the new site to follow soon thereafter.  To be notified if and when the new site goes live, please sign up at this form.  You will receive a single mail, and your e-mail address will then be thrown away.

Update: On September 5, the Wikimedia Foundation officially announced that they will proceed with the fork, and contrary to my optimistic prediction, Internet Brands is suing everyone left, right, and center.  See follow-up post.

Update 2: The new site, called Wikivoyage, was launched on January 15, 2013 and is already better than Wikitravel ever was.

Why the Web will gut paid e-books and apps, and why free can pay for authors and publishers

Selling digital content at any price above zero is not sustainable: the Web is cheaper for readers, cheaper for writers and publishers, and far more discoverable and shareable than the squabbling hermit kingdoms of e-books and apps.  For both authors and publishers, the best strategy is to distribute for free and find another way to pay the bills. (Part 2 of 2.)

Back in 2008, I attended the Frankfurt Book Fair, with our little Wikitravel Press stand in Hall 4.2 just around the corner from the main area for technical talks.  And whenever there was something about e-books on, suddenly the hall would fill with sweaty publishing execs in cheap, crumpled suits, craning their heads and hoping against hope to hear and believe the message of joy: “Printed books may die, but paid digital content will save you!  Just keep calm, carry on, and sell your books as e-books and apps instead!”

For a publisher, this vision of beauty is an immensely seductive proposition: keep your business model, keep your pipeline, keep your editorial process.  Sell a slightly tarted-up version of your print-ready book, turned into an ePub or .mobi or iOS app or whatever flavor of the day your snake-oil CMS merchants tell you you need, and as a bonus get rid of all that tedious faffing about with print runs, distribution and unsold stock.  And now, 5 years later, it all seems to be coming together!  What could possibly go wrong?

Only one thing: for the vast majority of publishers, paid content is as real as green-haired fairy princesses, because the Web will gut the business model for paid apps and e-books.  There are three reasons for this.

First and foremost, you can’t beat the Web on price.  The price of a printed book has been established through decades of trial and error: it accurately reflects the cost of creating and distributing the physical book, the price the market will pay, the level of competition with other printed book publishers and the margin the publisher needs to survive.  The current price of e-books and apps, on the other hand, is entirely disconnected from the actual cost of creating and distributing each additional copy, which is essentially zero. If the same content, or at least substitutable content, is available on a website for free — and the Internet being what it is, the answer is usually “yes” — there will be relentless price pressure to drive those prices down to match.  Forget $9.99 e-books or even $0.99 e-books: the price point to beat is $0.00.

Second, the Web allows drastically lower overheads for connecting authors to readers.  Building e-books and getting them distributed, much less building mobile applications and getting them into the famously developer-hostile iTunes store, are arcane arts limited to expensive professionals wearing propeller beanies.  Any monkey with a keyboard, on the other hand, can hammer out and publish a blog or forum post, and while the vast majority of them deservedly sink without a trace, a truly original or insightful idea will go viral on its own merits.

Third, apps (e.g. iTunes) and e-books (e.g. the Kindle Store) are walled gardens, and history tells us that walled gardens always lose.  Minitel, Compuserve, America Online and the rest all restricted users to officially approved islands of inaccessibility cut off from the rest of the Net, and despite an initial run of success due to clean, well-integrated interfaces and lots of industry players taking advantage of easy ways to bill users, none could compete in the long run with the sheer breadth of content and what Technology Review’s Jason Pontin recently dubbed the “linky-ness” of the Web.  Probably the simplest way to visualize just how crippling these walls are is to search for (say) *Paris* with your favorite search engine, and see how many links to apps and e-books you get back: you’ll find the answer is zero.

The forces outlined here are clear and inescapable, and they mean that it will gradually become harder and harder to profit simply by selling copies.  And once there are no copies to sell, and no bookstores to sell them to, the last justifications of a traditional publisher’s existence — sales, distribution and chasing up invoices — disappear, with editing, design and marketing becoming optional add-ons instead of mandatory parts of the package.

The solution?  Join the light side of the force, throw away your precious business model, and become a website yourself.

Now, it’s easy to fall into the trap of assuming that just because the vast majority of websites are free to access, they must also be free to produce, and hence it must be a losing proposition to pipe content that has been paid for into a free website.  This is, of course, a fallacy: the incremental cost of serving an additional reader via the Web may be virtually zero, but keeping any website of significance up and running is an expensive proposition. TripAdvisor, famed purveyors of travel information they notionally didn’t pay a cent for, had operating expenses of $338.5 million last year, a large chunk of which went into paying people to fish out the most egregious chunks of spam from their firehose of contributions.  In the dead trees publishing world, this is called “editing”, and while TripAdvisor’s focus is very much on quantity over quality, others may choose the opposite.

Authors in this new world will thus have a choice.  One option is to exchange risk for the certainty of a fixed but low paycheck and write work-for-hire for a website that monetises itself with any of the existing business models out there for the Web: advertising, transactions (brokerage), associated merchandising, etc.  In the world of reference publishing, including travel, work-for-hire is already the norm and these authors will see little difference — assuming, of course, that the companies they work for survive the transition, which is by no means a given.

The more exciting but financially dangerous choice is to strike out on their own.  If your main goal is to share your writing or ideas with the world, the digital world is your oyster: start blogging and promoting, and worry about money later.  If Karl Marx were publishing The Communist Manifesto today, would he make it a website or a $0.99 e-book?

If you already have a significant following and would like to turn it into a career, simply asking your fans for money may work, but the guaranteed advance revenue of Kickstarter-style crowdfunding seems more appealing; Seth Godin recently pulled in $130,000 in a few hours.  Cory Doctorow famously gives away copies of all his e-books and makes it back in increased print sales, a format which, much as we like to diss it, will be around for a while, especially in the deluxe hardcover editions that fans love and authors earn well from.

Now here’s the catch: both these authors could easily charge for what they write, since some of their fans would pay to unlock the gate and pass through the digital wall to read their next book.  But they choose not to, since every book they lock away represents one less opportunity for a new fan to find them.

And if you are publishing your first novel, you would be a fool to barricade yourself in a digital fortress and hope that some greater fool is willing to take a punt on paying you even 99 cents, when there is an ever-increasing plethora of free alternatives.  Achieving fame as an aspiring novelist has always been a long shot, why sabotage your already meager odds for the 34 cents that are left over after Amazon takes its 65% cut?  As Cory says:

There has never been a time when more people were reading more words by more authors. The Internet is a literary world of written words. What a fine thing that is for writers.
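For what it’s worth, the 34-cent figure above is straightforward arithmetic on the 65% cut quoted in the text:

```python
# Royalty left from a $0.99 e-book after a 65% store cut,
# in whole cents (the sub-$2.99 royalty tier mentioned above).
price_cents = 99
store_cut_pct = 65
author_cents = price_cents * (100 - store_cut_pct) // 100
print(author_cents)  # 34
```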

Did you miss Part 1?  Check out Eat yourself or be eaten: a tale of two travel publishers.

Eat yourself or be eaten: a tale of two travel publishers

Success in print publishing does not translate to success in digital publishing, and many common measures for digital success mislead.  The primary medium of the future is the unchained Web, and restricting your free content offerings out of fear of cannibalization will only lead to somebody else eating your readers instead.  (Part 1 of 2.)

Today, we’re going to look at some charts, comparing Alice Publishing with Bob Publishing.  These are both thinly disguised real travel publishers, but as my intention is not to slag or praise any specific companies, I’m using the aliases so we can focus on them as examples.  Like the CIA, I will neither confirm nor deny any putative identifications in the comments, and sloppy speculation may lead to waterboardings from fellow commenters.  (Obligatory disclaimer: neither Alice nor Bob is my employer, Lonely Planet; and as always, this blog represents no one’s opinions but my own.)

Volume of printed books sold

Both Alice and Bob are big names in travel publishing.  According to Bookscan, Alice is a contender for the top spot in much of the world when it comes to volume of printed books sold, shifting around 1.7 million books last year.  Bob is a few spots down the pecking order, selling around 800,000 copies, which is still more than respectable but means they’re only about half of Alice’s size.


Print vs digital revenue (parent companies)

Now shipping around pallets of dead trees is all well and good, but how are they faring at bits and bytes?  Neither Alice nor Bob are telling directly, but the publishing conglomerates that own them do, and Alice’s owner is only too happy to tell us they’re the industry leader for the hottest figure in today’s publishing industry, print vs digital revenue, pulling in 33% last year and promising to be the first to break the magical 50% barrier as early as this year.  On the other hand, Bob’s corporate masters only managed to reach 20%.  Strike two for Bob.

There’s a reason, or actually two, why publishers like the “print vs digital revenue” figure so much.  First, the further your print sales collapse, the higher the share of digital revenue goes. Indeed, Alice’s print sales dropped 14% last year, while poor Bob was whacked by almost 20%, boosting their digital shares by a handy ~4%, a third of the putative growth.  And second, “digital” is a sufficiently fuzzy term that it’s pretty easy to redefine it to your advantage.  In Alice’s case, its owner’s “digital” revenue includes a giant educational services arm, and would thus better be described as “not print”.  Another publisher not considered today goes further and includes all the printed books they sell from their website in their “digital” sales.
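To see how a print collapse alone manufactures digital “growth”, here’s a back-of-the-envelope sketch with made-up round numbers: digital revenue held flat at 30 units while print falls 14% from 100.

```python
def digital_share(digital, print_rev):
    """Digital's slice of total revenue."""
    return digital / (digital + print_rev)

before = digital_share(30, 100)        # ~0.231
after = digital_share(30, 100 * 0.86)  # ~0.259, digital revenue unchanged
print(round(after - before, 3))        # ~3 points of "growth" for free
```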


E-books vs printed books (estimate)

A more reliable indicator of how well a publisher is actually transferring their readers over from print to paid digital is the split of e-books to printed books.   Alas, precisely because this number is nowhere near as flattering, publishers are very reluctant to disclose even volume shares, much less revenue shares or, heaven forbid, actual sales figures, and BookScan doesn’t have any data either.   Alice’s owners offer precisely one figure: of all books sold last year, 14% were e-books, and while I’d wager the split for travel guides was more in print’s favor, that’s the best I’ve got.  Bob and company are even more tight-lipped, offering up only the meaningless puff of “triple-digit growth in e-book sales”, so I’m going to assume that they managed to pull in the industry average of maybe 8% or so.

Strike three — but Bob’s not out, and in fact, I think Bob is much, much better placed than Alice to survive through the digital revolution.  Here’s why:


Millions of readers per year

Around 5 years ago, Alice launched a flashy website with lots of ads and minimal content.  It won a bunch of obscure design awards and has been gathering dust ever since: Alexa estimates they get around 1500 visitors a day, which works out to 360,000 a year.  With e-books and apps still on the level of a rounding error, Alice’s total number of readers for print and digital combined is thus around 2 million a year.

Bob, on the other hand, has been working on their website since 1996 with a simple two-point philosophy: post everything on your website for free, and don’t worry about cannibalizing your printed books.  This is why they now pull in around 3.6 million unique visitors a month, which translates to over 43 million a year, or a total readership of nearly 44 million a year.  That’s 22x more than Alice!  So when Alice’s brand loses its dominance on bookstore shelves, because there are no more mass-market bookstores and thus no more shelves, which of the two can still connect with readers?
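The arithmetic behind that comparison, using the round figures quoted above:

```python
# Rough annual readership: print copies sold plus website visitors.
alice_total = 1_700_000 + 360_000     # books + ~360k web visitors/year
bob_total = 800_000 + 3_600_000 * 12  # books + 3.6M web uniques/month

print(alice_total)              # 2060000
print(bob_total)                # 44000000
print(bob_total / alice_total)  # ~21.4, i.e. ~22x once Alice is rounded to 2M
```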


Direct revenue (US$) per reader per format, Alice Publishing

“So what?”, I hear the hard-nosed publisher snort. “Website freeloaders add nothing to the bottom line!”   Indeed, when it comes to direct revenue per reader, everybody who buys a book from Alice chips in around $15, buyers of Alice’s e-books pay around $12, and people who download Alice’s apps pay around $6 a pop.  People who visit Alice’s website, on the other hand, pay approximately nothing.  Isn’t it thus completely contrary to your own interest, downright crazy, to offer free content that drives people away from the paid products?

If we were dealing only with printed books, the answer would of course be “yes”.  If Bob started giving away their books for free, they would quickly conquer the market and demolish Alice’s sales.  But they cannot do this sustainably, because it costs real money to print and distribute books, and that’s why the price of a printed travel guide from any publisher has converged to around $15.

But in the digital world, once you have created a piece of content, there is virtually no cost to distributing an additional copy of it.  The equilibrium price is thus zero, and if you don’t distribute your content at that price, somebody else will, and they’ll eat you alive. That’s why Bob is already busily kneecapping Alice’s (already fairly pathetic) app sales by offering their own city apps for free; and that’s why the biggest threat to Alice is not Bob, but Charlie Digital, whose travel website gets more readers every day than Alice gets in a year.


Millions of readers per year, version 2

And the kicker?  Charlie, better known as TripAdvisor, made a profit of $177 million last year and is tracking to improve on that this year — and it pulled off this trick without charging for any of its content.

Keep reading for part 2, in which we’ll take a look at why getting readers to pay for their content directly will prove unworkable for the vast majority of publishers, and how the creation of quality content can be funded nonetheless.

Designing the Travel Guide of the Future, Augmented Reality Edition

In my previous post on the Travel Guide of the Future, I glibly dismissed the possibility of an augmented reality interface as a form factor, because “we haven’t managed to figure out a decent portable interface for actually controlling the display … it’s looking pretty unlikely until we get around to implanting electrodes in our skulls.”

Two weeks later, word leaked out about what was cooking at Google X, and last week Google officially announced Project Glass.  Oops!  Time to eat my words and revise that assumption in light of the single most exciting announcement in travel tech since, um, ever.

As it happens, augmented reality displays are a topic I have more than a passing familiarity with: for my master’s thesis back in 2001, I built a prototype wearable translation system dubbed the Yak-2, using a heads-up display.  At the time, the MicroOptical CO-7 heads-up display (pictured above) was state-of-the-art military hardware reluctantly lent to researchers for $5000 a pop; it’s almost surprising that, in the ten years since, what Google is using today is not much different, with the smart money betting on the Lumus OE-31.

Credentials established?  Let’s talk about the challenges Google faces today.

User interface: actually using the darn thing

Hardware

The absolute Achilles heel of wearable computing for me, for Google and for everybody who has ever tried to popularize the darn things and failed is the user interface.  Every mainstream human input device — keyboards, touchscreens, mice, trackballs, touchpads, you name it — is intended to be operated by a hand pressing against a surface, and that’s the one thing you cannot sensibly do while operating a wearable computer.   A lot of research has gone into developing ways around this, but none have gained traction as they all suffer from severe drawbacks: handheld chording keyboards (extremely steep learning curve), gesture recognition (limited scope and looks strange), etc.  My Yak prototypes used a handheld mouse-pointer thingy, which was borderline functional but still intolerably clunky, and speech recognition, which worked tolerably well in lab conditions with a trained user, but fell flat in noisy outdoor environments.

Based on the Project Glass concept video, Google is trying their luck with speech recognition, a tilt sensor for head gestures, plus — apparently — an entirely different interface: eye tracking, so you can just look at an icon for a second to “push” it.  (Or so it seems; the other possibility is that the user is making gestures off-camera, although the bit where he replies to a message while holding a sandwich makes this unlikely.  While easier to implement technically, this would be a far inferior interface, so for the rest of this post I’m going to optimistically assume they do indeed use eye tracking.)

The radical-seeming concept is actually not new, as eye tracking is a natural fit for a heads-up display.  IBM was studying this back around 2000 and ETH presented a working prototype of the two in combination in 2009, but Google’s prototype looks far more polished and will be the first real-world system deploying the two simultaneously that I’m aware of.  Problem solved?

Software

Not quite.  The biggest of Google’s user interface problems is that they now need to develop the world’s first usable consumer-grade UI for actually using this thing.  As the numerous painfully funny parodies attest, it’s actually very hard to get this right, and Google’s video glosses over many of the hard decisions that need to be made to provide an augmented reality UI that’s always accessible, but never in the way.  How does voice recognition know when it’s supposed to be listening for commands, and when you’re just talking to a buddy?  How does the software figure out that moving the head down when stretching should pop up the toolbar, but moving it down to pour coffee should not?  You can only presume there will be modes like “full UI”, “notifications only” and “completely off”, but without physical buttons to toggle them it’s difficult even to figure out a solid mechanism for switching between these.

And that’s just for user-driven “pull” control of the system.  For “push” notifications, like the subway closure alert, Google has to be able to intelligently parse the user’s location, expected course and a million other things to guess what kinds of things they might be interested in at any given moment — and, yes, resist the temptation to spam them with 5% off coupons for Bob’s Carpet Warehouse.   Fortunately, this kind of massive data number-crunching is the kind of thing Google excels at, and the glasses will presumably come with a limited set of in-built general-use notifications that can be extended by downloading apps.

As a reference point, it’s taken Android years to get most of the kinks worked out of something as simple as message notifications on a mobile screen, and even UI gurus Apple didn’t get it right the first time around.  It’s pretty much a given that the first iterations of Project Glass will be very clunky indeed.

Incidentally, while the video might lead you to believe the contrary, one problem Google won’t have is the display blocking the entire field of view: the Lumus display covers only a part of one eye, with your brain helpfully merging it in with what the other eye sees.

Hardware: what Google isn’t showing you

Take a careful look at Google’s five publicity photos.  What’s missing?  Any clue of what lies at the other end of the earpieces, artfully concealed with a shock of hair or an angled face in every single shot.  Indeed, Lumus’s current displays are all wired to battery packs to serve that energy-hungry display (just like my CO-7 back in 2001), although apparently wireless models with enough capacity to operate for a day are on the horizon and Sergey Brin was “caught” (ha!) wearing one recently.

Display aside, though, the computing power to drive the thing still has to reside somewhere, and even with today’s miracles of miniaturization that somewhere cannot be inside that thin aluminum frame.  Thus somewhere in your pocket or bag there will be a phone-sized lump of silicon that does the heavy lifting and talks to the Internet.  The sensible and obvious thing to do would be to use an actual phone, in which case the glasses just become an accessory.  This kills two birds with one stone: it conveniently cuts down what would otherwise be a steep pricetag of $1000+ into two more manageable chunks of $500 or so each (assuming Google initially sells the Lumus more or less at cost), and it provides extra interfaces in the form of a touch screen and microphone that can be used for mode control and speech recognition (e.g. press a button and hold the phone up to your mouth to voice commands).

Killer app: travel guide or Babel Fish?

Google is quite clearly thinking about Project Glass as just another way to consume Google services: socialize on Google Plus, find your way with Google Maps, follow your friends with Latitude, etc.  While some of this obviously has the potential to be very handy, and almost all of it certainly qualifies as “cool”, without anything entirely new the device runs the risk of becoming the next generation of Bluetooth headset, a niche accessory worn only by devoted techheads.  The question is thus: what sort of killer apps could this device enable as a platform?  Obviously, my interest lies in travel!

So far, most augmented reality travel apps have assumed that reality + pins = win, but this doesn’t work for augmented reality for precisely the same reason it doesn’t work for web apps:

As a rule, people do not wander down streets randomly, hoping that a magical travel app (or printed guidebook) will reveal that they have serendipitously stumbled into a fascinating sight.  No, they browse through the guide before they leave, or on the plane, or in the hotel room the day before, building up a rough itinerary of where to go, what to see and what to beware of.  A travel guide is thus, first and foremost, a planning tool.

Which is not to say Project Glass won’t have its uses.  Even out of the box, turn-by-turn navigation in an unfamiliar city, without having to browse maps or poke around on a phone, is by itself pretty darn close to a killer app for the traveller, and being able to search on the fly for points of interest is also obviously useful.

But probably the single most powerful new concept to explore is what I poked around with in 2001, namely translation.  Word Lens/Google Goggles-style translation of written text is obvious, but the real potential and challenges lie in translation of the spoken word.  Using the tethered phone’s microphone and speaker, it should be possible to parse what the user says, have them confirm it on screen, and either have them try to read it out or simply output the translated phrase via the speaker.  Depending on how good the speech recognition is (and this is pushing the limits today), it could even be possible to hand the phone over to the other person, have them speak, and have the glasses translate that instantly.  And if both parties are wearing the glasses, each with a microphone and an earphone, could we finally implement the Babel Fish and have unobtrusive simultaneous translation, with the speech of one rendered on the screen of the other?  This may not be science fiction any more!
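
To make the parse-confirm-speak loop above concrete, here is a minimal sketch in Python.  Real speech recognition and machine translation are stood in for by a tiny hard-coded phrasebook, and every name here (`PHRASEBOOK`, `recognize`, `translate_utterance`) is hypothetical; an actual implementation would call out to real speech and translation services on the tethered phone.

```python
# Hypothetical sketch of the confirm-then-speak translation loop.
# Recognition and translation are faked with a tiny phrasebook.

PHRASEBOOK = {  # stand-in for a real translation service
    "where is the station": "eki wa doko desu ka",
    "how much is this": "kore wa ikura desu ka",
}

def recognize(audio):
    """Stand-in for speech recognition: pretend audio arrives as text."""
    return audio.strip().lower().rstrip("?")

def translate(phrase):
    """Look the recognized phrase up in the phrasebook."""
    return PHRASEBOOK.get(phrase)

def translate_utterance(audio, confirm=lambda text: True):
    """Parse what the user says, let them confirm the transcription
    on screen, then return the phrase to play through the speaker."""
    heard = recognize(audio)
    if not confirm(heard):  # user rejected the on-screen transcription
        return "(cancelled)"
    translated = translate(heard)
    return translated if translated else "(no translation found)"
```

The on-screen confirmation step is the interesting design choice: it papers over imperfect recognition by letting the user catch mistakes before the phone speaks on their behalf.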

Conclusion

Project Glass has immense potential, but like most revolutions in technology, people are likely to overestimate the short-term impact and underestimate the long-term impact.  The first iteration is likely to prove a disappointment, but in a few years’ time this or something much like it may indeed finally supplant the printed book as the traveler’s tool of choice on the road, and create a few new billion-dollar markets in the process.

Slicing the fruitcake of atomic content

Atomic content is the buzzword for distilling travel information into its smallest possible units, the atom-like single points of interest (POIs) that stud the pages of a guidebook, in the same way that nuts stud the innards of a rich, rummy fruitcake.  And the premise certainly sounds seductive: once these nuts are liberated from the dough tying them together, so the theory goes, they can be repackaged and recombined into all sorts of new, sexy pastries and confections.  Baklava!  Pistachio ice cream!  Nutty monetized eyeballs!  The sky is the limit!

And from a purely technological point of view, atomic content does make huge amounts of sense.  Tag POIs with geographical coordinates and store them in a database, and all sorts of neat things that you can’t do with a printed guidebook suddenly become easy: you can create a dynamic map that can pan and zoom, you can serve them up in a car navigator, you can get a list of all restaurants within 100 meters of a point.  Whee!
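
As a toy illustration of that “all restaurants within 100 meters” query, here is a sketch in Python.  The POI names and coordinates are made up, and a real system would use a spatial index rather than this linear scan, but the great-circle arithmetic is the standard haversine formula.

```python
# Toy radius query over geotagged POIs; data is illustrative only.
import math

POIS = [  # (name, latitude, longitude)
    ("Ramen Alley", 35.6595, 139.7005),
    ("Sushi Corner", 35.6601, 139.7010),
    ("Distant Diner", 35.6700, 139.7200),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def pois_within(lat, lon, radius_m):
    """Names of all POIs within radius_m metres of (lat, lon)."""
    return [name for name, plat, plon in POIS
            if haversine_m(lat, lon, plat, plon) <= radius_m]
```

This is exactly the kind of query a printed guidebook can never answer, which is why the technological case for atomic content is so strong.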

Yet the fallout of decomposing a fruitcake into its constituent parts is insidious and dangerous.

People rarely count the number of nuts in a slice of fruitcake, but if the price is the same, any shopper will take the 1 kg bag of nuts over the 500 g bag of nuts.

Atomic content emphasizes quantity over quality.  A guidebook lists only the best 10 restaurants for a town, not all 500, but any feed consumer will prefer the feed of 500 POIs to a feed of just ten.  This perversely incentivizes lowering the bar and keeping around old and even actively harmful information simply to inflate the POI count.

The only perceptible difference between the various brands of peanuts at your local supermarket is price.

Atomic content is commoditized.  There are a handful of sources out there for local POI information, all offering the same basic bits and bobs of information: name, address, coordinates, perhaps a telephone number, web address or opening hours.  The only differentiating factors are quantity (see previous point) and, as a distant second and third, accuracy and recentness, resulting in a race to the bottom that only the largest can win.  The quality of the review doesn’t really figure as far as Google is concerned, which means that user-written reviews, even if they’re crap or spam, can easily trump professionally authored reviews.
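
The commoditization is visible in the data itself: strip any vendor’s feed down and you get roughly the same record.  A sketch of that interchangeable core, with hypothetical field names:

```python
# The interchangeable core of a commoditized POI record; every vendor's
# feed reduces to roughly these fields (field names are hypothetical).
from dataclasses import dataclass
from typing import Optional

@dataclass
class POI:
    name: str
    address: str
    lat: float
    lon: float
    phone: Optional[str] = None    # "perhaps a telephone number..."
    website: Optional[str] = None
    hours: Optional[str] = None
```

Nothing in this schema carries editorial judgement, which is precisely the problem: two feeds with identical schemas can only compete on row count and freshness.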

If I pick an almond from a bag of mixed nuts and eat it, there is nothing stopping me from eating a macadamia or cashew next.

Atomic content is not sticky.  If I Google “how do I get from Narita airport to central Tokyo”, Google will return 140,000 pages all advising me to take the train, not a taxi.  Having gleaned that atom of information off the page — and the way the Internet works, it’s the page most whole-heartedly devoted to answering that question and only that question that will bubble to the top — I have no incentive whatsoever to stay on the page, and odds are extremely high that the next question I ask, whatever it may be, will take me somewhere else entirely.

It’s the dough that makes or breaks the cake.

Ever heard somebody praise the quality of the raisins in a fruitcake?  Me neither.  Atomic content lacks the core of good travel information: the prose that binds it all together.  A sentence or two telling me where to find cheap late-night eats and where to go for romantic haute cuisine dinners is a far more useful starting point than an alphabetically organized phonebook of restaurants.

People value a slice of good nutty cake far more highly than a bag of nuts.

But the biggest morsel to take away is this: travel itself is not atomic.  Yes, there are occasions when I want to find an atom of information like “a good Chinese restaurant open for lunch in Northbridge”, but the true value of a guidebook is when it can help you with Rumsfeld’s “unknown unknowns”: the things I should know about a destination, but do not know to ask about.  I will not Google “gem scams in Bangkok” if I have never heard of the Thai capital’s shifty diamond dealers; I will not take a 30-km detour to an awesome museum in a neighbouring town unless someone tells me it exists.

The way to compete against atomic content is thus not to play the atomic content game, at least not to the extent of letting it corrupt your core cake-baking skills.  Bake cakes that are dense, filling, rich and nutty, which taste good from the first bite and leave the traveler hungry for more.

User-generated content: what went wrong and why it still matters

The buzzword user-generated content (UGC), aka “crowdsourcing”, is starting to sound a little 2007, with the cool kids having moved on to hype social travel, itself a subject worthy of a future post.  So what was the original promise, why didn’t it pan out as expected, and is there still a future for it?

The Promise

I’ll lay my own bias on the line up front: I’ve been contributing to wiki-style “user-generated” sites like Everything2, Wikipedia and Wikitravel for over ten years now, and was sufficiently impressed by the last of these to throw away a steady job and take a stab at spinning off Wikitravel Press as a commercial publishing business.  Coming from a software development background, and thus familiar with the battle between open source software created from the bottom up vs closed, commercial software decreed from the top down (see Eric S. Raymond’s The Cathedral and the Bazaar for a primer), it seemed obvious to me that the traditional cathedral model of guidebook publishers deciding what to sell, toiling away behind high walls to package it up, and selling the end result at a stiff markup was doomed to eventually lose to the raucous bazaar of travellers swapping free tips online, with the cream percolating to the top in the spirit of the same happy collaborative anarchy that created Wikipedia.

My initial experiences at Lonely Planet only served to reinforce my belief.  Until recently, the company has revolved entirely around a well-oiled machine for turning authors’ raw manuscripts into polished guidebooks, with tight focus and strict quality control, and this model has served them well over the 38 years that they were competing against other guidebook publishers trying to do the same.  But now that the competition can get their content for free from countless contributors around the globe and distribute it at virtually zero cost, how can they possibly afford to keep paying not just the writers themselves, but editors, proofreaders, cartographers etc as well?

The Reality

Funny thing is, it turns out that crowdsourced content in general (and travel content in particular) isn’t quite the panacea people expected it to be.  A few user-generated travel sites have certainly prospered, most notably TripAdvisor, which takes the hands-off approach of letting anybody post anything about everything and leaving it to the reader to sort the wheat from the chaff, providing only a taxonomy of places and points of interest for navigating through it all.  This works great if you already have a hotel or two in mind, want to read about them in detail, and have your bullshit detector fine-tuned well enough to filter out the reviews by touts and fruitcakes; alas, it’s next to useless if you’re trying to, say, find a nice winery inn to stay at in Melbourne’s Yarra Valley, since information about regions is hopelessly scattered, or even a nice, affordable hotel in Tokyo, since you’re given a list of 640 and made to sort through them yourself, with no way to figure out if you should be basing yourself in Shibuya or Shinjuku.

Wikitravel set out to address this by taking a leaf from Wikipedia’s book and allowing users to edit as well as write, with the explicit goal of creating a readable end-to-end travel guide, instead of just a scattershot collection of factoids and opinions.  Now eight years old, the site is still trundling along and even slowly increasing its Alexa rank as of late, but it has never quite achieved the mass-market impact of Wikipedia.  The reasons why are varied and complex, and being taken over in 2006 by a used-car company and frozen in time interface-wise probably didn’t help, but at the end of the day the problem may boil down to a series of fundamental tensions between the open-to-all wiki model and the intention of a travel guide:

  • Wikitravel is meant to serve travellers, but it’s business owners that benefit the most from a good review.  Thus, while each traveller has a weak individual interest in ensuring that each entry is accurate and realistic, the business owner of that entry has a very strong incentive to ensure that it is not.  This is much less so at Wikipedia, where articles are rarely used by consumers to make purchasing decisions.
  • Wikipedia has an explicit goal of creating a neutral encyclopedia and a raft of policies that work towards this end: neutral point of view, citations, references, etc.  Wikitravel has to rely on the subjective opinions of anonymous travellers, and when they are in conflict, it is not possible to say who is “right” and who is “wrong”: the only possible route is to strip out anything disputable and leave behind bland trivia.  This is not helped by the steady stream of Wikipedians coming in under the misconception that, as in Wikipedia, dull, unopinionated writing is a good thing.
  • If writing a neutral review is hard enough, then curating a neutral list of top attractions, best places to eat etc is even harder, especially for country or region-level articles.  These tend to be constantly subject to edit wars, with residents and business owners pitching for their own places and surreptitiously trying to remove others.

None of these forces are insurmountable, and those articles on Wikitravel that are watched like hawks by benevolent neutral caretakers can shine like finely polished jewels, but they do explain why the quality of Wikitravel articles varies so widely, why there are fewer truly usable Wikitravel articles than there are informative Wikipedia articles, and why none of the many companies out there trying to create automated guidebooks purely out of Wikitravel or other user-generated travel content have really pulled it off.  Other travel wikis, like TripAdvisor’s Inside, lack Wikitravel’s sense of community and thus fare even worse on all counts, the odd quality contribution drowned in a sea of spam.

Nevertheless, Wikitravel content is still used even by shiny new startups like Triposo, simply because there is nothing better out there.  The traveller, however, is not thus constrained, and that’s why they still willingly pay a premium to the traditional guidebook publishers for guaranteed quality, coverage and cohesiveness.

The Future

What then?  In the PC industry, the epic battle between open-source Linux and closed-source Windows fizzled out when Apple came out of left field with OS X, which married open-source internals (Darwin) with a closed-source user interface (Aqua) smoothing out all the warts.  OS X now runs not only on Macs, but (disguised as iOS) on iPhones and iPads.  Apple pulled ahead of Microsoft in stock market valuation last year.

Likewise, I suspect the winner of the travel sweepstakes will be neither “UGC” nor “experts” alone, but the first travel company that manages to harness together a solid base of open content to build on, the raw power of a million travelers contributing and correcting, the iron fist of editors and curators pummelling it into shape, and the slick usability of a professionally designed and laid-out travel guide.  The pain point is money: pulling this together will not come cheap and the days of people paying $50 for a travel guide are almost over, yet in order to take off, the content must be deep, open to the world and not plastered with blinking banner ads.  Who will dare to take on the challenge?