Archive for the ‘Digital archiving’ Category

Digital Public Library of America (DPLA) — A Brief Update

July 4, 2015

Since the launch of the DPLA in April 2013, its staff, under the direction of its director, Dan Cohen, has been pursuing various projects to determine the best ways to develop this resource further and broaden its usefulness. In an April 2015 white paper, “Using Large Digital Collections in Education: Meeting the Needs of Teachers and Students,” authors Franky Abbott and Dan Cohen set forth one set of plans for making the DPLA valuable in K through 16 settings. The plans grew out of research supported by the Whiting Foundation and yielded a program that enlists the help of educators through another Whiting-funded initiative. The following 15 June 2015 “Call for Educators” on DPLA’s blog describes the kind of partnership with educators that the DPLA is seeking to undertake:

The Digital Public Library of America is looking for excellent educators for its new Education Advisory Committee. We recently announced a new grant from the Whiting Foundation that funds the creation of new primary source-based education resources for student use with teacher guidance.

We are currently recruiting a small group of enthusiastic humanities educators in grades 6-14* to collaborate with us on this project. Members of this group will:
• build and review primary source sets (curated collections of primary sources about people, places, events, or ideas) and related teacher guides
• give feedback on the tools students and teachers will use to generate their own sets on DPLA’s website
• help DPLA develop and revise its strategy for education resource development and promotion in 2015-2016

If selected, participants are committing to:
• attend a 2-day in-person meeting on July 29–30, 2015 (arriving the night of July 28) in Boston, Massachusetts
• attend three virtual meetings (September 2015, November 2015, and January 2016)
• attend a 2-day in-person meeting in March 2016 in Boston, Massachusetts (dates to be selected in consultation with participants)

Participants will receive a $1,500 stipend for participation as well as full reimbursement for travel costs.

DPLA has also been receiving significant funding from additional sources for other efforts, including support for its “hubs”: both its content hubs (“large libraries, museums, archives, or other digital repositories that maintain a one-to-one relationship with the DPLA and assist in providing and maintaining metadata for content”) and its service hubs (“state, regional, or other collaborations that host, aggregate, or otherwise bring together digital objects from libraries, archives, museums, and other cultural heritage institutions”). In a big boost to its hub development, the DPLA has recently received $1.9 million from the Alfred P. Sloan Foundation and $1.5 million from the John S. and James L. Knight Foundation; it will use this support to advance its efforts in “connecting online collections from coast to coast by 2017” (“Digital Public Library of America makes push to serve all 50 states by 2017”).

Book History and Digital Humanities: SHARP at #MLA 14 #s738

January 27, 2014

The recent MLA 2014 conference featured numerous sessions dealing with digital humanities in its various incarnations. More than a few of those sessions dealt with the interrelationships between new and old technologies, including Session 738, a stimulating roundtable sponsored by the Society for the History of Authorship, Reading & Publishing (SHARP) and organized by Lise Jaillant (University of Newcastle). Unfortunately, Lise was unable to attend MLA as planned, so Eleanor Shevlin served as chair in her stead.

Designed to “shed light on the digital future of book history and the bibliographical roots of digital humanities” (MLA special session proposal), the “Book History and Digital Humanities” roundtable featured six projects that attest to the close interrelationships between the two fields. The presentations were delivered in the chronological order of the projects. Not only did these projects illustrate the ways in which the digital and the book-historical are tightly intertwined, but they also demonstrated various technological advances, highlighting what a new generation of digital capabilities and thinking is affording scholarship.

Greg Prickman, head of the University of Iowa’s Special Collections and Archives, opened the session by discussing the Atlas of Early Printing, an interactive map that visualizes the spread of printing during the incunabula period. The 2013 version Greg demonstrated offers a technological advance over the map’s Flash-based design launched in 2008 and has been primed to operate effectively on mobile devices as well as desktops.

Atlas of Early Printing

Unlike the two-dimensional print maps from which it draws its inspiration, the Atlas contains information related to the spread of print, such as the locations of paper mills, universities, and trade routes. Users can select any or all of this additional information to create specific contextualizations of the ways the press and printing took hold throughout Europe in the decades leading up to the sixteenth century.

Interested in using technology for purposes beyond gathering, organizing, and explaining information, Michael Gavin, a professor of English at the University of South Carolina, discussed using computer simulation to create a more generative way of working with information. Specifically, Gavin draws on Joshua Epstein’s work in agent-based computational simulation to model early modern print culture and to “grow information” about seventeenth- and eighteenth-century book trade issues, including censorship and the effects readers exercised on printers and booksellers. Such computer modeling focuses on simulating social behavior to generate and test information; if the model is right, it should run without crashing.
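To make the agent-based idea concrete, here is a minimal toy sketch in the spirit of Epstein-style “grow it” modeling. It is emphatically not Gavin’s actual model: the genres, agent rules, and numbers are invented for illustration. Readers each prefer a genre; a printer observes sales each round and shifts its output toward whatever sold best, so an aggregate pattern “grows” from simple local rules.

```python
import random

random.seed(42)

GENRES = ["sermons", "plays", "news"]

class Printer:
    """Toy printer that adapts its output mix to observed demand."""

    def __init__(self):
        # copies printed per genre each round
        self.output = {g: 10 for g in GENRES}

    def adjust(self, sales):
        # shift capacity toward the best-selling genre
        best = max(sales, key=sales.get)
        for g in GENRES:
            self.output[g] += 2 if g == best else -1
            self.output[g] = max(self.output[g], 1)  # never drop a genre entirely

def simulate(rounds=20, readers=100):
    printer = Printer()
    prefs = [random.choice(GENRES) for _ in range(readers)]
    for _ in range(rounds):
        # each reader buys one copy of a preferred genre while stock lasts
        sales = {g: 0 for g in GENRES}
        for p in prefs:
            if printer.output[p] > sales[p]:
                sales[p] += 1
        printer.adjust(sales)
    return printer.output

final_mix = simulate()
```

The point of such a sketch is the methodological one the panel raised: rather than describing the book trade, one specifies plausible micro-rules (readers, printers, censors) and checks whether the simulated trade reproduces patterns seen in the historical record.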

The director of NINES and professor of English at the University of Virginia, Andrew Stauffer, made a cogent plea on behalf of imperiled nineteenth-century printed books. Individual copies of nineteenth-century books, often still in the stacks or in the process of being de-accessioned (if not already removed), possess rich, layered histories and the evidence of their multiple temporalities. In addition to advocating for the primacy of the printed work as a site embodying distinct, irreplaceable data, Stauffer is developing a crowd-sourcing project, aimed at preserving the histories of these works “hidden in plain sight,” that will ask academic institutions, other holding bodies, and individuals to use Instagram and other technologies to capture this heritage digitally and make it accessible.

Matthew Laven, the Associate Program Coordinator of the Mellon-funded “Crossing Boundaries: Re-envisioning the Humanities for the 21st Century” at St. Lawrence University, addressed the question “What is a digital bibliography of a book?” through his work on a dynamic, visually enriched publishing history of Willa Cather’s Death Comes for the Archbishop (1927) for the Willa Cather Archive. Serving as a case study for the digital representation of both various material artifacts (e.g., manuscripts, printed translations, unusual editions) and textual variants, the project also seeks to convey the bibliographical ties among the various artifacts and is informed by a Functional Requirements for Bibliographic Records (FRBR)-based ontology.

Hannah McGregor, a SSHRC postdoctoral fellow at the University of Alberta, spoke about constructing an innovative methodological approach to studying periodicals that she and Paul Hjartarson, professor of English and film studies at the University of Alberta, have been developing in collaboration with the Editing Modernism in Canada research group. A key working hypothesis of this project is that periodicals are ideally situated for digital remediation as relational databases because they themselves resemble databases (that the word “magazine” also meant a storehouse bespeaks this similarity). While middlebrow magazines serve as the project’s focal point, McGregor drew her examples from the Western Home Monthly and Pictorial Review. The issue of labeling—what to call different items, the problem of categories and categorization—has been a vexed point and one no doubt complicated by the multiplicities of genres and the nature of periodical materials (think of the Burney 17th and 18th Century Newspaper Collection). This issue of labeling underscored the ways in which coding is important intellectual labor.
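The claim that periodicals “resemble databases” can be illustrated with a minimal relational sketch. The schema below is my own illustration, not the Editing Modernism in Canada project’s actual design: an issue is a container of heterogeneous items, and each item carries a genre label, the very categorization the panel found so vexed.

```python
import sqlite3

# Hypothetical schema (table and field names are illustrative only):
# an issue is a storehouse of items, each tagged with a contested genre label.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue (
    id    INTEGER PRIMARY KEY,
    title TEXT,   -- e.g. 'Western Home Monthly'
    date  TEXT
);
CREATE TABLE item (
    id       INTEGER PRIMARY KEY,
    issue_id INTEGER REFERENCES issue(id),
    heading  TEXT,
    genre    TEXT  -- 'fiction', 'advertisement', 'editorial', ...
);
""")
conn.execute("INSERT INTO issue VALUES (1, 'Western Home Monthly', '1925-06')")
conn.executemany(
    "INSERT INTO item (issue_id, heading, genre) VALUES (?, ?, ?)",
    [(1, "A Prairie Serial", "fiction"),
     (1, "Soap advertisement", "advertisement")],
)
# relational queries then cut across issues by genre, date, title, etc.
fiction = conn.execute(
    "SELECT heading FROM item WHERE genre = 'fiction'"
).fetchall()
```

Once items are modeled this way, every genre label becomes a queryable fact, which is why the panel stressed that coding such categories is itself intellectual labor: the `genre` column hard-codes an interpretive decision.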

The final participant, Elizabeth Willson Gordon, professor of English at King’s University College in Alberta, presented the Modernist Archives Publishing Project (MAPP). A collaborative effort involving Canadian, U.K., and U.S. institutions, the project seeks to advance research in the history of modernist presses and publishing. Willson Gordon used Virginia Woolf’s Hogarth Press to illustrate the capabilities of MAPP. The Hogarth Press offered an especially rich example, not only because of the insights its history affords into Woolf and her work but also because of its importance to interwar publishing and its longevity throughout the twentieth century. Like many of the other projects discussed, MAPP illustrated the importance of collaboration and communities of scholars working in tandem. The launch of the Hogarth Press open-access portion of MAPP is slated for 2017.

The Book History and Digital Humanities session was one of three excellent panels sponsored by SHARP. SHARP’s liaison to MLA, Greg Barnhisel, has written a full account of the other two, equally invigorating sessions for the spring issue of SHARP News: the official SHARP panel, Session #501, Books and the Law, and Session #398, Virginia Woolf and Book History, co-sponsored with the Virginia Woolf Society.

Preserving Digital Archives

April 28, 2013

Most attendees at the Beinecke Library’s recent conference on digital archiving–“Beyond the Text: Literary Archives in the 21st Century”–arrived equipped with the idea that there is no preservation without loss.

What may have given some attendees pause, particularly those who work primarily on the first two centuries following the Reformation, is how much 21st-century digital stuff is being preserved–and how idiosyncratic the process of selection can be.

Faced with the data deluge of a contemporary literary figure’s electronic correspondence, for example, how do archivists determine what gets archived and what gets tossed?  Now that archiving can begin during a writer’s or publisher’s lifetime, without a family member’s interference (think Cassandra Austen), who shapes the archive?  And if digital archivists shape the archive, what principles of retention do they use?  Where do their loyalties lie? With the author?  Or with the data-hungry and feverishly scandal-mongering scholars of posterity?

The two-day conference raised unresolved and provocative questions, many of which focused on the problem of selection.  Fran Baker, the Assistant Archivist at the John Rylands Library, University of Manchester, discussed the complexity of archiving the Carcanet editorial papers, including email.  Hearing about the decision-making process that determines what stays and what gets tossed may not seem new to librarians familiar with the problem of sorting and discarding, but in the context of shaping an archive, that process and its likelihood of error take on urgency.

There were stories of forensic success, the most notable of which is Matthew Kirschenbaum’s narrative of the extensive and collective effort tracking down William Gibson’s electronic poem, “Agrippa,” which was designed to encrypt itself after a single reading.  That a text programmed to go away can be recovered suggests both the value of collaborating on large digital projects like The Agrippa Files and the perils of assuming that an author has control over her or his electronic archives.  Similarly, Beth Luey’s account of the rich storehouse of data contained in publishers’ records–sales data, copies printed, copies sold, print runs, design decisions, contracts, marketing files, legal disputes, reviews, book jacket design, subsidiary rights, and so forth–both encouraged work on publishers’ records and raised ethical and legal issues.  In the discussion that followed, for example, it became clear that though some publishers did not retain rejected manuscripts, others did, including pertinent correspondence and readers’ reports.

The keynote talk by David Sutton noted that literary manuscripts are like no other manuscripts in that they offer insights into the act of creation.  He also showcased ongoing projects that promote awareness of digital literary archives.

Hazel Carby’s eloquent, harrowing, and culturally resonant account of tracing her family genealogy back to a slave owner’s carefully archived records, reminded everyone that archives preserve both the beautiful and the monstrous.

Diane Ducharme drew on her experience at the Beinecke to warn that however much we may desire an unmediated past and a pristine archival order free from editing and explicating, all archives arrive shaped and selected.  Her discussion underscored the importance of searching for the traces of a previous archivist’s work.

Micki McGee described her experience with the Yaddo Archive Project, which aims to provide visualizations of the social network of writers who worked at Yaddo.  She described the process of seeking a relational database with social network mapping and a visualization widget.  Though the project, Yaddo Circles, requires authentication and is not yet available for public view, a Vimeo overview and a sample relational visualization give a sense of what the project might produce.
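The core data step behind such a social-network visualization can be sketched simply. The residency records below are invented placeholders, not Yaddo data, and the approach (deriving weighted co-residency edges from writer–year pairs) is a generic technique, not necessarily the Yaddo Circles implementation.

```python
from collections import Counter
from itertools import combinations

# Hypothetical residency records: (writer, year) pairs.
stays = [
    ("Writer A", 1926), ("Writer B", 1926),
    ("Writer A", 1931), ("Writer C", 1931), ("Writer D", 1931),
]

# group residents by year
by_year = {}
for writer, year in stays:
    by_year.setdefault(year, []).append(writer)

# edge weight = number of years two writers overlapped in residence
edges = Counter()
for residents in by_year.values():
    for a, b in combinations(sorted(residents), 2):
        edges[(a, b)] += 1
```

The resulting weighted edge list is exactly what network-visualization tools consume, which is why a relational database of residencies maps so naturally onto the “circles” McGee described.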

McGee also recommended looking at the following projects:

These projects have potential for helping us recover the intensely sociable and highly competitive literary worlds of the long eighteenth century.   Like the many other provocative and interesting papers and introductions to sessions, they point a way forward even as they raise methodological, logistical, and even ethical questions.

This conference made clear the value of a longer conference, with sessions focusing on specific problems posed by digital archives of material both old and new.  I welcome contributions by others who attended the conference to help complete this cursory overview.