The Case for Books on NPR (Monday, Nov. 23rd)

Robert Darnton will discuss his The Case for Books on the Diane Rehm show (NPR, WAMU station) Monday, November 23rd, from 11 am to 12 noon (EST). While one can listen to the show in real time, the full archived version will be available on the show’s website (and then in its archives) about an hour after the program has aired.

Anna has provided a chapter-by-chapter synopsis of Darnton’s book in a series of comments on a previous emob post, The Digital Revolution and the Scholar: Darnton’s View.

12 Responses to “The Case for Books on NPR (Monday, Nov. 23rd)”

  1. Anna Battigelli Says:

    Diane Rehm’s interview with Robert Darnton is exactly the kind of discussion that should be going on in order to educate the public about the digital future and the future of books. Darnton’s point that we need to teach students to read slowly was an important one.

    I’m particularly interested in how one medium affects another and, in particular, how the presence of digital technology might reshape scholarly texts. The pyramid structure Darnton describes as his vision of digital publishing may prove more useful to scholars than the codex with its footnotes. I do not at all mean to disparage scholarly books; every scholar reads books with pleasure. But if digital technology allows for richer citations and many more kinds of citations, including film clips and radio, then perhaps scholarship should logically migrate to it, leaving printed texts for projects with a greater interest in artfully linear narratives.


  2. Eleanor Shevlin Says:

    Yes, I am always glad to see discussions such as this interview with Darnton taking place. I will say that Diane Rehm tends to host such guests, and I also think she is a very fine interviewer.

    While I certainly think scholars could take advantage of new formats for their work, I also think one needs to assess what medium (or media enhancements) best suits one’s project. Darnton’s discussion of the book he will publish next year, which will include songs, seems a sensible and good use of technology. His experimental article for the American Historical Review, “An Early Information Society: News and the Media in Eighteenth-Century Paris,” with its E-enhancements, exemplifies a purpose and topic well served by new technologies. Scott Casper et al.’s Perspectives on American Book History, with its CD-ROM of numerous digitized artifacts and primary documents, offers another example. In some cases, such as Ohio University Press’s recent distribution of four 19th-century monographs as free PDFs, technology can enhance access (though in Ohio University Press’s case the rationale seemed to be marketing; I just realized that Ohio State is also where NetLibrary got its start, so the press’s experiment may have some relationship to NetLibrary’s parent organization, a connection I hadn’t noticed before). If only an electronic file is offered, costs would probably be lower. However, in many cases I would prefer to buy the book rather than print my own copy, and in still others I would rather be given access to a website of “E-enhancements” that supplement or complement the printed book.


  3. Anna Battigelli Says:

    Like almost everyone else, including Bill Gates, I agree that digital surrogates cannot match the ease and pleasure of reading a printed text. It would be helpful in this scenario of electronic scholarly publishing to have print-on-demand as an option.

    Additionally, consigning scholarship exclusively to the internet raises concerns for guarding the scholarly archive. As technologies change, would entire collections of scholarship be lost? Darnton suggested a similar concern for the new information landscape when he emphasized the importance of “getting it right.” A lot is at stake.

    Finally, I liked his idea of opening up Harvard’s library through digitization projects. This would protect Harvard’s collections while also providing access. I’d like to hear more about that project.


  4. Eleanor Shevlin Says:

    POD is appealing, and evidently Harvard’s bookstore has already established the means for providing this service.

    Although almost three years old now, the American Association of University Presses’ Statement on Open Access offers some points to ponder.

    For one, it seems useful to consider the diction used to describe some forms of open access: the shift from a “market economy” to a “gift economy” (or “subsidy economy”). The term “gift” evokes many positive associations, but it also raises questions about finances and the source of funding for the “gift-givers.”

    Second, there is the issue of inequities:

    BOAI-type open access will require large contributions from either the authors or other sources (including foundations and libraries, which pay “member” fees instead of paying for subscriptions). Scholars at less wealthy institutions or those with no institutional affiliations may experience greater difficulty in publishing unless fees are waived or reduced (a process that will increase the burden on other authors, who will have to pay higher fees to offset the waivers). (p. 3)

    Third, there is the issue of faculty (and university) productivity:

    Finally, if faculty are asked themselves to become publishers, they will spend more of their time performing tasks for which they are not trained and less on the teaching and research for which they are, resulting in an overall loss in economic efficiency for the university as a whole. (p. 4)

    Fourth (and something that POD machines such as the Espresso Book Machine might solve [cost: USD $97,500 plus printer; the printers range in price from about $4,000 to $25,000]), there is the issue of cost savings when production shifts to individual users:

    many end users will prefer to print out what they want to read, especially longer articles and books, using printing devices that are less economical than dedicated printing presses.

    The statement contains other points worth considering as well.


  5. Eleanor Shevlin Says:

    The latest issue of The New York Review (December 17, 2009) features Robert Darnton’s commentary on the Google Settlement, “Google and the New Digital Future.”

    The piece opens by reviewing some of the momentous historical events that occurred on November 9ths, the original extended date for the revised Google settlement to be filed in the district court for the Southern District of New York. As Darnton notes, an additional extension moved the date once more, this time to November 13, 2009, “a less auspicious date” (82). Although nothing happened ultimately on the 13th save for the actual filing (significant as a marker in the hard bargaining over the emerging digital landscape), Darnton explains what is at stake in the long run: “Who ultimately wins is not simply a matter of competition among potential entrepreneurs but an issue of enormous importance to whoever cares about books, even though the public is reduced to the role of a spectator” (82).

    Darnton then turns his attention to opponents of the settlement, focusing at length on the arguments put forth by France (which emphasized the “unique character of the book”) and Germany (which emphasized the “right of privacy”). Having the same legal counsel, France and Germany put forth virtually identical secondary arguments, ranging from tackling the original settlement’s according “Google a virtual monopoly over orphan works” to critiquing the effects of the “most-favored nation clause” to objecting to the secrecy surrounding both pricing and auditing issues (83).

    Next Darnton addresses the Department of Justice’s response, in which it “acknowledge[s] [Google Book Search’s] potential to promote the public good” (83). He characterizes the DOJ’s response as presenting “a way to save the settlement” (83). The DOJ’s recommendations focused on the most contested elements of the original settlement, and it seems as if Google closely followed its suggestions (83-84). Darnton closes this section by analyzing these changes.

    While noting that Google’s revisions are insufficient in the eyes of its critics and that the future is unpredictable, Darnton concludes his essay by proposing two feasible solutions. The first, admittedly quite ambitious solution would be to “transform Google digital database into a truly public library,” a move that would “require an act of Congress” and would “make a decisive break with American habit of determining public issues by private lawsuit” (84). How Google would respond to such a proposal, as Darnton observes, is uncertain.

    The second possibility he offers involves calling upon a nonprofit entity (possibly the Internet Archive) to undertake the “digitizing, open-access distribution, and preservation of orphan works” (84). Funding would be provided by foundations, and economic stimulus funds could perhaps help finance the digitizing. This digital library would consist of all out-of-copyright and orphan works; new titles could be added as their copyright protection expired. Darnton suggests a timeline of a decade, “at the rate of million books a year” (84).

    In his final paragraph, he posits that the public debate generated by the Google settlement affords the opportunity to “enrich [the nation’s] culture” (84).

    As appealing as these two solutions may each be, I wonder how feasible either of them is.


  6. Anna Battigelli Says:

    Perhaps these suggestions are not feasible, but they are so appealing that it would seem madness not to propose them. Given how rapidly the digital world is approaching, now seems the time to put forward all possibilities. Darnton sees the historic momentousness of this new digital world.


  7. Eleanor Shevlin Says:

    I certainly understand why Darnton would offer these two solutions, and I didn’t mean to suggest otherwise. The first seems highly original in its boldness. Yet, I think convincing Google to sell (Darnton observes that Google would need to be compensated) its database of digital books to the U.S. government would be extremely difficult unless the settlement took a drastic turn and/or more legal obstacles arose that became just too onerous for Google. That the DOJ seemed to be seeking a way to “save the settlement” makes the first possibility seem unlikely. I also wonder if Congress would see the value in helping to finance such a national library, especially given the present economic woes facing the country.

    The second solution seems more within reach in some ways. In fact, efforts to create a universal digital library predate Google Books, having started as the Universal Digital Library, with its Million Books Project (an effort to digitize a million books, a goal that has already been achieved). This project has received funding from the National Science Foundation, the US government, and other sources. A participant in the Million Books Project, the Internet Archive was founded in 1996 to archive the web and to preserve existing digital material. Its Open Library is a project funded in part by the California State Library (and we know the current fiscal situation of California as a state). However, Google quickly surpassed these other efforts. Because Google has eclipsed these projects despite their head start, I wonder what could be done to spur them on and enable them to serve as a solution. To answer my own question, it would seem that we need leading academic voices to join with Darnton, and for these leaders to connect with leaders of industry willing to support this effort philanthropically.


  8. Anna Battigelli Says:

    I agree: Google is the key player in such a project. I also like your proposal that leaders within academia need to be joined by leaders within industry willing to support this digital library.

    In addition to all the other questions such a project would raise, I wonder whether the inevitable evolution of technology might one day leave digitally scanned text inaccessible. For such a project as the one you outline to gather support, would we not need some certainty that digitized material will survive technological evolution rather than be rendered inaccessible? I’m thinking of the problem of microfilm readers: as libraries reduce their number of readers, the case for retaining entire microfilm collections diminishes, at least in the minds of librarians focusing solely on current “use” rather than value.

    Perhaps this is a naive question, but do we know that digital texts are substantially more long-lasting than, say, microfilm? These discussions remind us of the sturdiness of the printed book.


  9. Eleanor Shevlin Says:

    Anna, it is excellent that you raise questions about the varying longevities of these different media.

    Archivists and conservators are understandably quite concerned about such issues. Microfilm, when high quality and stored under optimal conditions, has a presumed lifespan of 500 years (see, for example, this 2007 report from the Northeast Document Conservation Center). It also can be read by the naked eye.

    In 2006, archival-quality CDs (containing 24K gold) were developed that have a reported lifespan of up to 300 years, and DVDs of this quality, 100 years. (I have also seen very recent reports suggesting a CD has been developed that will last up to 500 years when stored under optimal conditions.) Pressed CDs and DVDs have greater longevity than those that are rewritable/recordable. Yet the development of these “gold” discs occurred only three or four years ago. What about the material “preserved” in these formats before then? And was there sufficient concern early on about the life expectancy of these media? A 2007 risk-assessment report by the British Library suggests that serious preservation problems exist:

    Given the variety of disc manufacturers and recording environments used to create the Library’s recorded optical media, it is impossible to produce an accurate estimate of failure rates. It would require considerably broader sampling and more detailed analysis than has thus far been possible, along with more upfront testing to distinguish between disks that are faulty as received, and discs that have developed faults through deterioration. Assuming that the discs recorded have been at the lower end of the quality spectrum, and that the failures seen thus far are representative, an estimated failure rate of 3% per year would seem a reasonable worst-case. (23; see pp. 22-23 for discussion of optical media)

    Of similar relevance is the statement on the JISC Media website (JISC is the organization that advises UK Higher Education and the same entity that helped negotiate the distribution of, and access to, ECCO for all UK higher-education institutions), An Introduction to Digital Preservation:

    It is interesting to note that due to technological obsolescence and media fragility many consider it possible that future generations will have less information about Gulf War conflicts (recorded on digital media) than the First World War (recorded on analogue media). The greatest asset of digital information – the ease with which it can be copied or transferred – is paralleled by the ease with which the information can be corrupted or deleted.

    As most of these discussions emphasize, two main issues are at stake when considering the preservation value of digital media: the physical lifespan (affected by storage conditions as well as the “natural” properties of the medium) and the longevity of the technology used to read the data.

    Although dated September 2005, the report “Your Data At Risk: Why you should be worried about preserving electronic records”, published by The National Council on Archives, is worth quoting in closing:

    A piece of paper can last for centuries left alone in a dry, dark room. Nothing created by a computer has that kind of inherent longevity – nothing like it in fact. Computers and their contents only survive by the active and ongoing help of human beings. (2)

    I addressed some of these issues briefly (as well as the symbiotic relationship between print and digital media) in the introduction to Part III, “Agency, Technology, and the New Global Revolution,” of Agent of Change: Print Culture Studies after Elizabeth L. Eisenstein, a collection of essays I coedited with Sabrina Baron and Eric Lindquist.


  10. Eleanor Shevlin Says:

    The latest issue of The New York Review of Books offers “Google & the Future of Books: An Exchange,” based on letters received in response to Darnton’s essay on this topic in the NYRB’s December 17th issue. One letter, signed by Paul Courant, Laine Farley, and six others representing libraries, the HathiTrust, and university information offices, argues that a consortium of libraries “are using Google-digitized volumes to create the ‘truly public library’ that [Darnton] seeks” (NYRB, 14 Jan. 2010, p. 64). Another letter, written by Theodore Koditschek (History, University of Missouri, Columbia) suggests, “If Google insists on retaining these holdings, it should be assigned the status of a public utility–like gas, water, or electricity–and the appropriate regulatory authority should be set up” (64). In his reply Darnton questions how truly public Hathi’s library is and provides a detailed counter-argument for why it is not. Darnton also welcomes Koditschek’s suggestion of setting up Google as a public utility and explores several variations of how such a configuration could work.


    • Anna Battigelli Says:

      This is an interesting exchange. Robert Darnton’s reply to one of the letters cites Geoff Nunberg’s post on Language Log, which makes a strong case for improving Google Book Search’s poor bibliographical metadata. It looks as if HathiTrust is attempting admirably to strengthen GBS’s bibliographical weaknesses, a good thing since such a project requires the very best librarians. But so far the more convincing arguments over the future of the GBS project–its possibilities and perils–are Darnton’s. I particularly admire his politely resolute insistence on attending to the public good.


  11. Eleanor Shevlin Says:

    The suggestion to treat Google Books as a public utility is particularly intriguing on several fronts. While the rationale for treating GB as a public utility derives from a desire to regulate Google Books in the public interest, the notion also evokes the vital function of books and knowledge in society. Gas, electricity, water, and the like are the services most associated with public utilities, though transportation can often also be regulated as such. These necessary resources, moreover, are often held by monopolies. Treating affordable access to books as akin to affordable access to traditional public utilities does not seem far-fetched to me, but I wonder how the idea might strike the general public.

