The Repository of Primary Sources has been running since 1995 at the University of Idaho. Under the wing of Terry Abraham, it lists “over 5000 websites describing holdings of manuscripts, archives, rare books, historical photographs, and other primary sources for the research scholar”, and “[all] links have been tested for correctness and appropriateness”.
So what has this to do with the evolution of the book? Well, in the world of book publishing, whose job has it been to make sure that a book is known about and can be found — not only on publication but after? Marketing, Promotion and Publicity, undoubtedly, but they would be among the first to shout if Editorial or someone had not registered the book’s metadata with Bowker or the equivalent local ISBN registry.
According to Google, there are 129,864,880 books in the entire world (as of 5 August 2010, 8:26AM), but that is a semi-statistical estimate for the modern era drawn from sources such as ISBN registrars and OCLC’s WorldCat. Bookfinder/JustBooks, launched in 1997 by Anirvan Chatterjee, claims that through its network it searches over 150 million books for sale. With the great hoohah over Hugh Howey’s Amazonian extrapolation, we can safely assume that there are many, many more books out there, probably without ISBNs, which after all only came into effect in the 1970s; even so, the ISBN now has vociferous opponents who call it an offline anachronism.
There is no question to beg about the usefulness of metadata. So is there a Terry Abraham and cohort out there to whom publishers and self-publishing authors can turn to deposit metadata whose links will be “tested for correctness and appropriateness”? Of course, that begs the question of whether there should be someone or some organization out there to perform that function. Why not leave it to the power of the Internet or the power of the market? Even if a book goes unnoticed or after a time becomes an “orphan work”, the power has spoken.
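The “tested for correctness” half of that function is at least the automatable half (appropriateness still needs a human eye). Here is a minimal sketch in Python of what such a link check might look like; the use of HEAD requests and the shape of the report are assumptions of this sketch, not anyone’s actual registry workflow:

```python
# Minimal sketch of a metadata link check: report each URL's HTTP status
# so dead links can be flagged for review. Illustrative only; judging
# "appropriateness" is left to a person.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_links(urls, timeout=10):
    """Return a dict mapping each URL to an HTTP status code or an error string."""
    results = {}
    for url in urls:
        req = Request(url, method="HEAD")  # HEAD avoids downloading the page body
        try:
            with urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status
        except HTTPError as e:
            results[url] = e.code          # e.g. 404: a broken link
        except URLError as e:
            results[url] = str(e.reason)   # DNS failure, timeout, etc.
    return results
```

Anything that comes back other than a 200-range status is a candidate for the registry’s attention.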
Let’s leave the power politics for another bookmark. Whoever performs the function, what exactly is it? Let’s call it the “findability” function. It goes beyond the usual social media marketing of a book or ebook that most publishers assign to Marketing. It goes beyond the usual search engine optimization (SEO), although it is arguably a part of it.
It goes to making the book as locatable an object as it can be, endowing it with “ambient findability.” See Peter Morville’s book of that title and judge for yourself whether “endowing something with ambient findability” misconstrues what he is saying or how the Web works. Nevertheless, …
Superfluous as they are claimed to be becoming, should publishers leave findability to the ISBN registries and librarians (until they become superfluous as well) or to the technorati?
As the book evolves, this “findability” function currently falls between the stools of Commissioning, where the editor discovers the author and pumps him or her not only for the ms but for connections leading to sales/marketing opportunities and further editorial opportunities; Editorial/Production, where the copyeditor, designer and production editor ensure that metadata is assigned, link-checks are run and the work is registered with the Library of Congress; Sales/Marketing, where marketeers scour the author’s questionnaire if it has arrived, compile mailing and emailing lists, draw up the roster of offline and online reviewers/bloggers, design the social media campaign, and where a sales account manager with responsibility for Amazon and other online accounts worries whether IT has included the work in the scheduled ONIX, EDI and customized catalog feeds; and Operations/Finance, where an accountant, analyst or inventory controller assigns the ISBN, usually upon receipt of contract approval.
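That last step, assigning the ISBN, has a purely mechanical core: the thirteenth digit of an ISBN-13 is a check digit computed from the first twelve with alternating weights of 1 and 3, per ISO 2108. A sketch of the calculation:

```python
def isbn13_check_digit(first12: str) -> str:
    """Compute the check digit for an ISBN-13 from its first 12 digits.

    Digits in odd positions are weighted 1, even positions 3; the check
    digit brings the weighted sum to a multiple of 10. Hyphens are ignored.
    """
    digits = [int(c) for c in first12 if c.isdigit()]
    if len(digits) != 12:
        raise ValueError("expected 12 digits (hyphens allowed)")
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return str((10 - total % 10) % 10)
```

For instance, the prefix 978-3-16-148410 yields the check digit 0.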
So if you are self-publishing or publishing books/ebooks, who attends to the ambient findability of what you are publishing? As more and more books go online, isn’t this part of the new craft and art of the book?
By the way, I found Morville’s book one rainy Saturday afternoon while shelving books at the local Oxfam bookstore. I bought it instead of shelving it.
I mean the sesquicentenary of the premature announcement of the death of the book and such of its hangers-on as authors, readers and libraries. I suppose I should be satisfied to have seen its centenary. Robert Coover’s essay in the New York Times (June 1992) marked it a bit early, echoing Louis Octave Uzanne’s tongue-in-cheek knelling in Scribner’s Magazine (August 1894), right down to the same title – “The End of Books”:
I do not believe (and the progress of electricity and modern mechanism [the phonograph] forbids me to believe) that Gutenberg’s invention can do otherwise than sooner or later fall into desuetude as a means of current interpretation of our mental products.
For Coover, not so tongue in cheek, it was hypertext’s divergent, interactive and polyvocal routes as opposed to the book’s unidirectional page-turning that heralded the death of the book (and the author). D. T. Max rang out against CD-ROMs and the Internet bang on time in 1994 with “The End of the Book?” in The Atlantic when it was still called The Atlantic Monthly:
… the question may not be whether, given enough time, CD-ROMS and the Internet can replace books, but whether they should. Ours is a culture that has made a fetish of impermanence. Paperbacks disintegrate, Polaroids fade, video images wear out. Perhaps the first novel ever written specifically to be read on a computer and to take advantage of the concept of hypertext … was Rob Swigart’s Portal, published in 1986 and designed for the Apple Macintosh, among other computers of its day. … Over time people threw out their old computers (fewer and fewer new programs could be run on them), and so Portal became for the most part unreadable. A similar fate will befall literary works of the future if they are committed not to paper but to transitional technology like diskettes, CD-ROMS, and Unix tapes–candidates, with eight-track tapes, Betamax, and the Apple Macintosh, for rapid obscurity. “It’s not clear, with fifty incompatible standards around, what will survive,” says Ted Nelson, the computer pioneer, who has grown disenchanted with the forces commercializing the Internet. “The so-called information age is really the age of information lost.” … In a graphic dramatization of this mad dash to obsolescence, in 1992 the author William Gibson, who coined the term “cyberspace,” created an autobiographical story on computer disc called “Agrippa.” “Agrippa” is encoded to erase itself entirely as the purchaser plays the story. Only thirty-five copies were printed, and those who bought it left it intact. One copy was somehow pirated and sent out onto the Internet, where anyone could copy it. Many users did, but who and where is not consistently indexed, nor are the copies permanent–the Internet is anarchic. 
“The original disc is already almost obsolete on Macintoshes,” says Kevin Begos, the publisher of “Agrippa.” “Within four or five years it will get very hard to find a machine that will run it.” Collectors will soon find Gibson’s story gone before they can destroy it themselves.
Best not to wait for that sesquicentenary then. Accommodatingly, in 2012, David A. Bell and Leah Price rolled out the canon once more, with Google, ebooks and the Kindle tolling not merely for the print book but for the New York Public Library and all libraries. We even had screenings throughout 2013, with more scheduled for January 2014, of the documentary Out of Print, which asks, “Is the book as we know it really dead? Is the question even important in an always-on, digital world?”
The nearer one stands, of course, the louder it is.
Sounded in the nineties but not obviously well heard, Paul Duguid, he of The Social Life of Information co-fame with John Seely Brown, advised “taking a breath”:
… it’s important to resist announcements of the death of the book or the more general insistence that the present has swept away the past or that new technologies have superseded the old. To refuse to accept such claims is not, however, to deny that we are living through important cultural or technological changes. Rather, it’s to insist that to assess the significance of these changes and to build the resources to negotiate them, we need specific analysis not sweeping dismissals.
… to offer serious alternatives to the book, we need first to understand and even to replicate aspects of its social and material complexity. Indeed, for a while yet, it will probably be much more productive to go by the book than to go on insistently but ineffectually repeating “good bye”.
So it is heartening (or depressing if you are a Jeremiah) to see 2013 rung out with an essay by Roger Schonfeld (ITHAKA S+R) that celebrates and encourages the specific analysis Duguid urged. In “Stop the Presses: Is the monograph headed for an e-only future?”, Schonfeld suggests several directions for further research and design:
What are the perceived constraints of existing digital interfaces with respect to long-form reading of scholarly monographs? What functional requirements does print currently serve better than digital with respect to monographs, even recognizing that many of the same individuals are acquiring and using tablets and reader devices for other purposes? How can content platforms and publishers better address the needs of academic readers and other users?
In an environment that has in many ways grown more fragmented over time, how can libraries and content platforms ensure the most efficient discovery and access experience possible for users of scholarly monographs? Is there a place for serendipity?
How can stewards of primary source materials in tangible and digital form, such as archives, museums, and digital libraries, most effectively support the digitization of their own materials for discovery and access purposes and provide for rich linkages with the analysis of their holdings found in the scholarly monograph?
If greater opportunities are provided over time for readers to engage with the primary sources, how might authors respond to reshape the nature of the monograph?
Will the digital version of the scholarly monograph diverge from the print version as additional features can be added?
At the heart of what changes but remains in the shift from print to digital are Search and Usability or “ambient findability” as Peter Morville terms them. Morville’s seminal work on information architecture, search and user experience focuses on the Web but is equally applicable to the book and ebook. A superior e-monograph will enlighten its readers by the author’s choice of information architecture and its enabling them to learn and evaluate the search paths that lead to the presentation, the arguments and the primary sources. Likewise the superior print monograph achieves its goals by the judicious combination of preliminaries, Part, Chapter, endmatter and thousands of years’ development of paratextual apparatus.
Of the print apparatus for search and usability, the table of contents and other parts of the printed book’s preliminaries may not remain a useful point of entry to a scholarly ebook. In 2002, when a small team at McGraw-Hill working with Unbound Medicine decided that putting the index at the front of HarrisonsOnHand in place of the table of contents made more sense for the user of an HP iPAQ, they thought they had made a major breakthrough for mobile ebooks. Almost. What they were realizing was the centrality of those twin navigational stars, Search and Usability.
Only a little over a decade later, the insight continues to dawn, and with the intervening improvements in interfaces and devices, it may be much brighter this time.
The process of digitizing a printed book involves much more than the conversion of ink on paper to bits in a file. Functional aspects of the book must be mapped to digital equivalents. Thus we have tables of contents and indices turning into hyperlinks and spine files, page numbers that beget location anchors and progress indicators.
So wrote Eric Hellman earlier this year in “Anachronisms and Dysfunctions of eBook Front and Back Matter”, concluding that the title page in an ebook ought to be a “Start” page like the start screens in the old interactive CD-ROMs or today’s DVDs of television series. Publishers such as Faber with T.S. Eliot’s The Waste Land or Moonbot Studios with The Fantastic Flying Books of Mr. Morris Lessmore have done just that.
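Hellman’s page numbers that “beget location anchors” can be made concrete: EPUB 3’s structural semantics vocabulary defines an epub:type of “pagebreak” for exactly this purpose. The sketch below converts an invented [[page:N]] placeholder convention (an assumption of this sketch, not any real workflow’s markup) into such anchors:

```python
import re

def insert_pagebreak_anchors(xhtml: str) -> str:
    """Replace placeholder markers like [[page:23]] with EPUB 3 pagebreak
    anchors, so print page numbers survive as addressable locations in
    reflowable text."""
    def anchor(m):
        n = m.group(1)
        return f'<span epub:type="pagebreak" id="page{n}" title="{n}"/>'
    return re.sub(r"\[\[page:(\d+)\]\]", anchor, xhtml)
```

With such anchors in place, a digital edition can cite, link to, or display the print pagination even though the “page” no longer exists as a physical unit.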
Although the EPUB doyen and doyenne, Richard Pipe and Liz Castro, advised usability-driven rethinking of frontmatter, the practice is not widespread among purveyors of the less-than-enhanced ebook. Most editorial and design advisors such as Joel Friedlander only go so far. Their advice generally assumes the direct transfer of print frontmatter to the ebook. While allowing for the omission of spatial anachronisms like the bastard or half title, they only caution against overburdening the ebook’s frontmatter. As for the traditional index at the other end of the ebook, many publishers omit it or simply replicate the print version without links. Ebook indexes that link terms to their multiple locations in the text regardless of the flow of the text in the ereader or device are rare, for obvious technical and financial reasons, and only this year was an EPUB specification for the index approved.
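Mechanically, such a linked index is a list in which each term points at stable anchors in the text rather than at page numbers, so the links survive reflow. A sketch, with a hypothetical anchor-id scheme and filename:

```python
def build_linked_index(entries):
    """Render a back-of-book index as XHTML list items in which each term
    links to every anchor where it occurs.

    `entries` maps a term to a list of anchor ids already embedded in the
    text, e.g. {"findability": ["idx-001", "idx-002"]}. The id scheme and
    the target filename "text.xhtml" are placeholders for this sketch.
    """
    items = []
    for term in sorted(entries):
        links = ", ".join(
            f'<a href="text.xhtml#{a}">{i + 1}</a>'
            for i, a in enumerate(entries[term])
        )
        items.append(f"<li>{term}: {links}</li>")
    return "<ol>\n" + "\n".join(items) + "\n</ol>"
```

The expensive part, as the indexing profession will attest, is not this rendering step but deciding where the anchors belong in the first place.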
The two great affordances of the printed book that most challenge today’s ebooks and ereaders, however, are legibility and the page. While screen legibility may be improving at a “blinding rate”, we have today little more specific, scientific analysis of screen vs print legibility than Ellen Lupton found in 2003, although Jakob Nielsen remains indefatigable on the subject. Mechanics aside, the debate over the efficacy of reading from the page vs that from the screen should always be kept in mind. Ferris Jabr’s April 2013 article in Scientific American and the six months of responses to it advanced the topic considerably. Jabr concluded, “When it comes to intensively reading long pieces of plain text, paper and ink may still have the advantage. But text is not the only way to read.” Which harks back to the conclusion of a previous post in Books on Books and Jerome Bruner’s apt observation of Lev Vygotsky’s fondness for Sir Francis Bacon’s epigram, “Nec manus, nisi intellectus, sibi permissus, multum valent” (Neither hand nor intellect left each to itself is worth much) (247). Perhaps for now neither print nor digital left each to itself is sufficient.
How the page matters. Enough so for Bonnie Mak to make it the subject and title of her book and to join Johanna Drucker, Peter Stoicheff, Jerome McGann and a long list of scholars conducting the analysis Duguid urged. As the August 2013 Ploughshares interview with her illustrates, Mak’s focus and interest in the material aspects of the page and book extend also to the library and performance art. Which brings us back to Drucker the book artist, who argues that instead of considering the page, table of contents, etc., as static, iconographic features of format, we should think of them as cognitive cues in an instruction set in the “program” of the codex. With reflowable text and responsive design, though, the cues can become slippery, so much so that the EPUB standard makers introduced Fixed Layout Properties with EPUB3.
This line of thinking about print space vs e-space comes sharply into focus if we consider annotatability, another of the printed book’s apparently superior affordances. While various devices and ereaders offer the ability to highlight and annotate, not all do, and the annotations are rarely accessible to others or across devices and platforms. The Web and ebook standards communities are hard at work on a specification for open annotation, which will enable the reader to share annotations of a work with other readers and enable annotations upon annotations. While we wait for the standards, though, the market spawns numerous solutions such as Readmill and SocialBook that functionally reflect “the conceptual and intellectual motivations” behind the affordance.
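For a sense of what such a specification looks like, the W3C’s Open Annotation Community Draft models an annotation as a body (the reader’s note) attached to a target (a selected passage of a work). Here is a minimal sketch in its spirit; the identifier, the quoted passage and the note are illustrative placeholders, not a guaranteed-conformant record:

```python
import json

# A minimal annotation in the spirit of the Open Annotation draft model:
# a textual body attached to a selected passage of a target work.
annotation = {
    "@context": "http://www.w3.org/ns/oa.jsonld",   # draft-era context URI
    "@type": "oa:Annotation",
    "hasBody": {
        "@type": ["cnt:ContentAsText", "dctypes:Text"],
        "chars": "Compare Duguid on the persistence of older media.",
    },
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "urn:isbn:9780000000000",      # placeholder identifier
        "hasSelector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "go by the book",              # the passage annotated
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the annotation is a freestanding record pointing at the work rather than a mark trapped inside one reader’s file, it can be shared, aggregated, and itself annotated.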
These experiments and successes exemplify the specifics Duguid urged. The big print-to-digital experiment of the last decade that would by any measure be deemed to have exceeded expectations, however, is the Google Book Project. Whether it was conducted in any sense “by the book” has been extensively argued in the courts and wherever else publishers, authors, technophobes and technophiles tend to gather. The year saw the dismissal of the Authors’ Guild case against Google, which left everyone just to carry on behind the scenes as they had been. So we are left with both the occasion for further bell-tolling for the book and further Duguidian exploration and experimentation, as well as the avenues of research suggested by Schonfeld.
There is, however, one more change to ring at the close of 2013. The metaLAB project pulls a bit on that rope, but Kenneth Goldsmith grasps it firmly and echoes Michael Agresta’s earlier insights into the many web-to-print phenomena that demonstrate that these two technologies may be forever intertwined. Goldsmith’s “The Artful Accidents of Google Books” highlights several individuals’ obsession with scanning errors from the Google Book Project. One of them is Paul Soulellis, the proprietor of the Library of the Printed Web, which “consists entirely of stuff pulled off the Web and bound into paper books”.
Soulellis calls the Library of the Printed Web “an accumulation of accumulations,” much of it printed on demand. In fact, he says that “I could sell the Library of the Printed Web and then order it again and have it delivered to me in a matter of days.” A few years ago, such books would never have been possible. The book is far from dead: it’s returning in forms that few could ever have imagined.
Or imagined digesting, like the series of book art by the late Dieter Roth, Literaturwurst (1969), to which Agresta gloomily alludes as “a final possible future for the paper book in the age of digital proliferation”.