What if Tim Berners-Lee had Invented Twitter?

This is one of those nicely ambiguous counterfactuals. Contrast, as David Lewis does:

If Caesar had been in command [in Korea] he would have used catapults
If Caesar had been in command he would have used the atom bomb

In this case I am thinking about catapults. I mean Twitter is obviously a much simpler system than the web, and in that respect it might have been easier for a researcher in 1992 to dream up a good way of sharing citations amongst internet-connected scientists. Also the grammar of Twitter is much thinner, less ambitious and more constrained than the grammar of HTML. Precisely because Twitter is a lot simpler than the web, it would seem at least possible that it could have evolved sooner than it did, just as catapults came before atom bombs (OK — I know that we didn't have SMS in 1991 so it wasn't there to be cannibalised, but please don't spoil the fun). Let us suppose that Twitter had come first, that Tim Berners-Lee had come up with some consistent, open, distributed, asymmetrical protocols for sharing references and very short messages across the internet. Surely that was the kind of scientific communication tool and update system that the CERN bosses would have been looking for? They could have given him a straightforward promotion. Why did he have to give them so much more? How different would the internet be now if he had given them less?

The web would be a lot smaller. There would not be so many big files. So many big web pages. The web would have been very different if TBL had limited a web page to 140K. The web as a Twitter-verse would presumably be a lot flatter, more horizontal. Fewer pools of database-driven depth. There would still have been spam (Twitter has about as much spam as the web). There would still have been porn and nice graphics (Marc Andreessen could have saved himself several man-months and invented Twitpic, which would have been much easier than writing Mosaic). There would also have been much more indirection (bit.ly and tinyurl would have taken the place of Yahoo and Google) and a lot more opacity (domains, countries and languages being harder to organise in the Twitter framework). TBL would have found it as hard as the Twitter founders to discover a business model for the invention (that is the penalty for inventing a new syntax). But it could still have spread like wildfire, as indeed Twitter has spread like wildfire these last three years….

However it is a good thing that Tim invented what he did, catapulted us into the 21st Century with his super-collider of an invention and left it for Dorsey, Stone and Williams to pick up the other simpler, similar, idea. Later. These counterfactual musings are prompted by yesterday's announcement that Twitter is now developing a new @anywhere service layer, which suggests to me that Twitter will become an even finer-grained and more diaphanous network, parasitic on the web, but aiming to interconnect as many web resources and web services as possible in a layer of commentary and shared perception. It is as though the Twitter founders are trying to fill the conversational interstices left by the operation of web services. This proposition may have important implications for publishers and media.

Does XML really matter?

There is a new burst of enthusiasm for XML amongst book publishers. Mike Shatzkin, who often has cogent things to say, has produced a little encomium for XML in Publishers Weekly.

Here’s what we call the Copernican Change. We have lived all our lives in a universe where the book is “the sun” and everything else we might create or sell was a “subsidiary right” to the book, revolving around that sun.

In our new universe, the content encased in a well-formed XML file is the sun. The book, an output of a well-formed XML file, is only one of an increasing number of revenue opportunities and marketing opportunities revolving around it. It requires more discipline and attention to the rules to create a well-formed XML file than it did to create a book. But when you're done, the end result is more useful: content can be rendered many different ways and cleaved and recombined inexpensively, unlocking sales that are almost impossible to capture cost-effectively if you start with a "book." [What the Hell Is XML?, Publishers Weekly, 15 Dec 08]

At the risk of being taken to be the kind of oaf who burps loudly in the presence of royalty (questioning the supreme value of XML is a bit like breathing garlic all over her majesty), I am inclined to pour cold water over this.

XML has been with us for 10 years. It certainly has its uses, especially in managing large complex texts and integrating text databases. But XML has not been and is not the be-all and end-all of digital publishing. XML is a property of texts, a style of handling them for flexible representation. In the last five years (especially since Google Book Search started motoring) it has become increasingly apparent that the book-as-book is the critical output of book publishers. Indeed PDFs are still a crucial component of the book publishing process, and for many of the most useful applications of the digital book, the PDF file is the crucial starting point. Copernicus, after all, was right: the sun is the centre of the solar system. Books really do matter and they are at the centre of the GBS system.

In one crucial respect XML has been and is a damagingly misleading tool for publishers (as deleterious in its effects on newspapers and magazines as on books): it has encouraged the mistaken view that text objects can only be used on the web if they are repurposed. XML was invented primarily because it was seen as a flexible way of ‘marking up’ the incredibly diverse world of print in ways that could be reconciled with HTML and the web. Everything printed would be repurposed for the web, and XML would facilitate this step. This now looks like it may not be an efficient way to look at things. Google Book Search and other digital representation platforms are showing us that repurposing a book or a magazine is not necessary and usually results in the loss of important information. It is certainly a mistake to suppose that XML is necessary if books are to be effectively used on the web or in databases — as Google Book Search, the largest print database, demonstrates. Above all, any particular implementation of XML is only as good as the design for which it was crafted; XML is not future-proof, and it is highly misleading of Shatzkin to recommend:

“You’ll save the most money right away if you create many books that are similar in structure and thus can be rendered from the same ‘style sheet’.”

Books should only be similar in structure, and their texts should only share the same style sheet, if they are similar in purpose. A rigid XML style sheet for the whole of a publisher’s list is for many publishers a lousy idea. Designing, or selecting, your books to fit your style sheet is putting the cart before the horse.
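To make the point concrete: a well-formed XML source can indeed be rendered in several different ways, but only because each renderer is written against that particular structure. Here is a minimal sketch in Python; the element names (chapter, title, para) and both renderers are hypothetical illustrations, not any real publishing schema such as DocBook or TEI.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical "well-formed XML file" for one chapter.
SOURCE = """\
<chapter>
  <title>The Copernican Change</title>
  <para>The content, not the book, is the sun.</para>
  <para>Every output revolves around it.</para>
</chapter>"""

def render_html(xml_text):
    """One 'style sheet': render the chapter as an HTML fragment."""
    root = ET.fromstring(xml_text)
    parts = ["<h1>%s</h1>" % root.findtext("title")]
    parts += ["<p>%s</p>" % p.text for p in root.findall("para")]
    return "\n".join(parts)

def render_plain(xml_text):
    """Another 'style sheet': render the same chapter as plain text."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("title").upper(), ""]
    lines += [p.text for p in root.findall("para")]
    return "\n".join(lines)

print(render_html(SOURCE))
print()
print(render_plain(SOURCE))
```

The cut works both ways: feed either renderer a chapter whose structure differs and it breaks, which is exactly why one style sheet across a whole list only pays off when the books really are structurally alike.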

Launch Party for a Blog?

Charkin Blog is a blog published in book form by Macmillan. I missed the launch party, but from several reports it was an enjoyable and intriguing event.

I was amused to see on the Macmillan page about the book that it is listed as weighing 0 Kg. That seems to me very light for a book of 576 pp. Could it be that the weight is indeterminate, customisable according to requirements? The book is, after all, Print On Demand. Perhaps one can order a deluxe version on Indian paper, which would weigh very little, but surely at least 0.2 Kg? The British Library (which presumably gets a free copy) should look after posterity with a copy printed on vellum.

The weightless version of Charkin Blog is still here.

Print going with the grain of the Web

It is an article of faith for Exact Editions that:

Print works well on the web when it is represented exactly the way it is.

Exact Editions works on the assumption that the web can re-present print perfectly adequately and there are many advantages in having magazines (and books) accessible on the web as exact replicas of the print editions. Actually, buried beneath this ‘article of faith’ is a very deep conviction that print publications are incredibly strong and will survive and prosper as digital editions (oddly enough, many in conventional publishing doubt this).

On the other hand, we do in various ways try to enhance or improve the print editions so they work better as a web resource than would a mere print replica. First, by making the titles individually and collectively searchable. Second, by adding elements of helpful interactivity (clickable contents pages, e-mail addresses, URLs, phone numbers and ISBNs, for example).
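The second kind of enhancement can be sketched in a few lines: scan the text of a page for bare URLs and ISBN references and wrap them in HTML links. The sketch below is a deliberately naive illustration, not our production code; both regular expressions are simplistic assumptions, and the ISBN lookup address (example.com) is a placeholder, not a real service.

```python
import re

# Naive patterns for bare URLs and ISBN references in page text.
# Real pages need far more care: punctuation, check digits, line breaks
# across a scanned page, and so on.
URL_RE = re.compile(r"\b(?:https?://|www\.)\S+[\w/]", re.I)
ISBN_RE = re.compile(r"ISBN[:\s]*([\d-]{9,16}[\dX])")

def _link_url(m):
    url = m.group(0)
    href = url if url.lower().startswith("http") else "http://" + url
    return '<a href="%s">%s</a>' % (href, url)

def _link_isbn(m):
    # Placeholder lookup URL; strip hyphens for a canonical ISBN string.
    isbn = m.group(1).replace("-", "")
    return '<a href="http://example.com/isbn/%s">%s</a>' % (isbn, m.group(0))

def linkify(text):
    """Wrap bare URLs, then ISBN references, in anchor tags."""
    return ISBN_RE.sub(_link_isbn, URL_RE.sub(_link_url, text))

print(linkify("See www.exacteditions.com or quote ISBN 978-3-16-148410-0."))
```

Run over every page of a digital edition, a pass like this turns a static replica into something that behaves like a native web resource without repurposing the print layout at all.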

For all these reasons, we find it hard to think of digital publishing as being inimical to print publishing, to reading, or indeed to civilisation as we know it. If you want some gloomy hand-wringing about the future of print, of fiction and of literacy, you can find it here, here and, with rather more insight and optimism, here.

We have been thinking more about the ways in which print and digital can interact. And it really is a matter of interaction. This is not a market in which digital will simply replace print and paper. Publishers, booksellers and retailers really need to think long and hard about the immense advantages of working with a medium in which print sales can be used to help digital sales, and vice versa. Having a physical bookstore or news kiosk on the street is potentially a great way in which to leverage digital sales. Having a virtual bookstore or kiosk is an amazingly good way in which to leverage sales of the printed book or to garner more print subscriptions for a magazine. Getting the two media flows to work together is the biggest challenge that we face.

Kindle Reviewed

There was an amusing, very watchable, but unkind review of the Amazon eBook reader by Robert Scoble.


The review is rude and harsh to the point of unfairness (and Scoble admits as much), partly because he doesn’t dwell on the good/interesting points. He does however say that he read two books on the device, serious books (at least one of them was, since it was by Greenspan). That seems to me an important plus for the Kindle. He had lots to complain about, but he read two books.

A more thoughtful review comes from Ars Technica. John Timmer gives a convincing account of what it is like to use the Kindle and he introduces the fruitful concept of a ‘Reading Model’ (different media influence how the text they contain gets read in different ways). You should read the whole piece but you will get the flavour of the discussion from this:

I’ll leave it to you to ponder the reading models of newspapers and magazines in order to focus on the Kindle’s reading model, which is largely enforced by a combination of the E Ink screen and the underlying operating system. Like a book, the Kindle enforces arbitrary page contents based on what can be rendered in a single screen, and is read left-to-right.
There are only a couple of cases where books probably won’t work well. One is with books that feature heavy use of illustrations or pictures, as not all images display well on the E Ink screen. The delays involved in flipping long distances forward or backwards page-by-page means that books without a good chapter structure or readers that constantly shuffle around their book will have problems with the Kindle’s reading model. Otherwise, Amazon clearly has the book thing down. [Ars Technica]

This review gives insight into what it’s like to read with the Kindle. It’s very helpful that John Timmer has tried to define the style of reading to which this machine lends itself (he guesses that commuters will like it). We have the impression that the pagination of a book on the Amazon Kindle is not the same as the pagination in print (clearly the newspapers and magazines are ‘repurposed’ and lose their print pagination). That is a pity. But the Kindle may have a more promising second coming when the engineers have absorbed Scoble’s usability strictures. In mid-2008 it will probably look and feel a bit more like the iTouch/iPhone!