Archive for the ‘NEH’ Category

Digital Humanities Summer Institute

November 20, 2015

This is just in from the Renaissance Society of America. –Anna

 

The Renaissance Society of America is pleased to announce that it will partner with the Digital Humanities Summer Institute (DHSI) in 2016, to offer five tuition scholarships (each for one week) to current RSA members who wish to attend the institute.

Additionally, all current RSA members will have the opportunity to register for one of the institute’s courses (one week) at a discounted rate.

The DHSI (dhsi.org) will be held on 6–10 June and 13–17 June 2016 at the University of Victoria, Canada. Participants may choose to attend one or two weeks of the institute. Each week will include a training workshop as well as a selection of colloquia, unconferences, panels, and institute lectures.

Tuition scholarships

Note: If you’re applying for a tuition scholarship, do not register for any course until after the RSA informs you of the result of the scholarship competition. This is because, in the event that you win a scholarship, DHSI cannot refund registrations.

Eligibility: Applicants must already be RSA members in 2015, and if they win a scholarship, they must renew their membership for 2016.

Deadline: 30 November 2015

The committee will select two non-doctoral scholars, two junior scholars (including adjuncts and independent scholars), and one senior scholar (including adjuncts, independent scholars, and retired scholars).

The scholarship covers the cost of tuition only; transportation and lodging costs are the responsibility of the winner.

Application:

  1. Fill out a very brief form that asks for name, email address, mailing address, affiliation, academic status, and discipline.
  2. Submit documents by email (as attachments, to DHSIapp@rsa.org):
    • Resume (no more than two pages)
    • One-page letter indicating which DHSI course you propose to attend and how it meets your overarching research aims. Please also identify a second course choice, in the event that your first choice is unavailable.

Discounted registration rate for RSA members

Note: If you’re applying for an RSA tuition scholarship, do not register for any course until after the RSA informs you of the result of the scholarship competition. This is because, in the event that you win a scholarship, DHSI cannot refund registrations.

Before 1 April 2016, RSA members can register for either weeklong course at the discounted rate of $300 for students and $650 for nonstudents. To view a list of all forty-three courses, please go to dhsi.org. Because the most popular courses will fill before April, we recommend that you register in December or January, as soon as the results of the scholarship competition are known.

To register at the discounted rate, you must be a current RSA member (2015) and you must renew your membership for 2016.


2013 ODH Project Directors Meeting

September 23, 2013

The NEH has just announced that its 2013 Office of Digital Humanities Project Directors Meeting will take place on Friday, October 4, 2013, at NEH Headquarters in Washington, DC.

As in the past, the meeting will feature 3-minute Lightning-Round presentations from ODH grantees. This year thirty-two grant recipients from 2013 will be presenting, almost all of those who received a grant this year. EMOB will be reporting on these presentations in a subsequent Fall post. See an earlier post for reporting on past NEH awards.

In addition to these lightning rounds, Dr. Michael Witmore, Director of the Folger Shakespeare Library, will give one of two keynote addresses. His talk is titled “Adjacencies, Virtuous and Vicious, in the Digital Spaces of Libraries.”
Abstract: This talk will explore how techniques of discovery — scanning shelves, exploring digital texts and catalogues — may change the nature of research conducted in Libraries. The argument: with the advent of massively searchable digital corpora, the uses and advantages of “nearness” in Libraries will change.

Dr. Amanda French, of the Center for History and New Media at George Mason University, will deliver the second keynote, “On Projects, and THATCamp.”
Abstract: Since its start in 2008, THATCamp, The Humanities and Technology Camp, has seen more than 170 events held or planned worldwide and has provided digital training and professional development to more than 6000 people, most of them humanities scholars, students, or professionals. Whether we consider it one project or many, THATCamp has become an essential feature of the digital humanities landscape, and it is time for some perspective on it.

While there is no charge to attend, one must register. For more details and to register to attend, please visit the ODH webpage.

Virtual Paul’s Cross Project website is now available for exploration!

May 8, 2013


About a year ago, EMOB devoted a post to several NEH-funded digital projects. John N. Wall, Project Director and Professor of English Literature at NC State University, has let us know that the Virtual Paul’s Cross Project website is now available for exploration at http://vpcp.chass.ncsu.edu. We provide below the press release announcing its availability and invite EMOB readers to explore and comment.

The Virtual Paul’s Cross Project uses visual and acoustic modeling technology to recreate the experience of John Donne’s Paul’s Cross sermon for November 5th, 1622. The goal of this project is to integrate what we know, or can surmise, about the look and sound of this space, destroyed by the Great Fire of London in 1666, and about the course of activities as they unfolded on the occasion of a Paul’s Cross sermon, so that we may experience a major public event of early modern London as it unfolded in real time and in the context of its original surroundings.

The Virtual Paul’s Cross Project has been supported by a Digital Start-Up Grant from the National Endowment for the Humanities.

The Virtual Paul’s Cross Project has sought the highest degree of accuracy in this recreation. To do so, it combines visual imagery from the 16th and 17th centuries with measurements of these buildings made during archaeological surveys of their foundations, still in the ground in today’s London. The visual presentation also integrates into the appearance of the visual model the look of a November day in London, with overcast skies and an atmosphere thick with smoke. The acoustic simulation recreates the acoustic properties of Paul’s Churchyard, incorporating information about the dispersive, absorptive or reflective qualities of the buildings and the spaces between them.

This website allows us to explore the northeast corner of Paul’s Churchyard, outside St Paul’s Cathedral, in London, on November 5th, 1622, and to hear John Donne’s sermon for Gunpowder Day, all two hours of it, in the space of its original delivery and in the context of church bells and the random ambient noises of dogs, birds, horses, and crowds of up to 5,000 people.
There is a Concise Guide to the whole site here.

In keeping with the desire for authenticity, the text of Donne’s sermon was taken from a manuscript prepared within days of the sermon’s original delivery that contains corrections in Donne’s own handwriting. It was recorded by a professional actor using an original pronunciation script and interpreting contemporary accounts of Donne’s preaching style.

For John Donne’s Paul’s Cross sermon for November 5th, 1622 (in 15-minute segments), as heard from 2 different positions in the Churchyard, go here.

On the website, the user can learn how the visual and acoustic models were created and explore the political and social background of Donne’s sermon. In addition to the complete recordings of Donne’s Gunpowder Day sermon, one can also explore the question of audibility of the unamplified human voice in Paul’s Churchyard by sampling excerpts from the sermon as heard from eight different locations across the Churchyard and in the presence of four different sizes of crowd.

For excerpts of the sermon from eight different locations and in the presence of different sizes of crowd go here.

The website also houses an archive of materials that contributed to the recreation, including visual records of the buildings, high resolution files of the manuscript and first printed versions of Donne’s sermon for Gunpowder Day 1622, and contemporary accounts of Donne’s preaching style. In addition, the website includes an acoustic analysis of the Churchyard, discussion of the challenges of interpreting historic depictions of the Cathedral and its environs, and a review of the liturgical context of outdoor preaching in the early modern age.

To see the visual model in detail in a fly-around video, go here. This is especially dramatic if viewed in HD and at full-screen display.
This Project is the work of an international team of scholars, engineers, actors, and linguists. In addition to the Project Director, they include David Hill, Associate Professor of Architecture at NC State University; Joshua Stephens, Jordan Grey, Chelsea Sacks, and Craig Johnson, graduate students in architecture at NC State University; John Schofield, Archaeologist at St Paul’s Cathedral and author of St Paul’s Cathedral Before Wren (2011); David Crystal, linguist; Ben Crystal, actor; Ben Markham and Matthew Azevedo, acoustic engineers with Acentech, Inc; and members of the faculty in linguistics and their graduate students at NC State University, especially professors Walt Wolfram, Erik Thomas, Robin Dodsworth, and Jeff Mielke.

Wall’s team is now planning a second stage of this Project, with the goal of completing the visual model of Paul’s Churchyard, including a complete model of St Paul’s Cathedral as it looked in the early 1620’s, during John Donne’s tenure as Dean of the cathedral. This visual model will be the basis for an acoustic model of the cathedral’s interior, especially the Choir, which will be the site for restaging a full day of worship services, including Bible readings, prayers, liturgies from the Book of Common Prayer, sermons, and music composed by the professional musicians on the cathedral’s staff for performance by the cathedral’s organist and its choir of men and boys. They will be competing for our attention, as they did in the 1620’s, with the noise of crowds who gathered in the cathedral’s nave, known as Paul’s Walk, to see and be seen and to exchange the latest gossip of the day.

Folger Institute “Early Modern Digital Agendas”

November 29, 2012

The following announcement, from Owen Williams, Assistant Director of the Folger Institute, will be of interest to readers:

In July 2013, the Folger Institute will offer “Early Modern Digital Agendas” under the direction of Jonathan Hope, Professor of Literary Linguistics at the University of Strathclyde. It is an NEH-funded, three-week institute that will explore the robust set of digital tools, with their period-specific challenges and limitations, that scholars of early modern English now have at hand. “Early Modern Digital Agendas” will create a forum in which twenty faculty participants can historicize, theorize, and critically evaluate current and future digital approaches to early modern literary studies—from Early English Books Online-Text Creation Partnership (EEBO-TCP) to advanced corpus linguistics, semantic searching, and visualization theory—with discussion growing out of, and feeding back into, their own projects (current and envisaged). With the guidance of expert visiting faculty, attention will be paid to the ways new technologies are shaping the very nature of early modern research and the means by which scholars interpret texts, teach their students, and present their findings to other scholars.

This institute is supported by an Institutes for Advanced Topics in the Digital Humanities grant from the National Endowment for the Humanities’ Office of Digital Humanities. Please visit http://emdigitalagendas.folger.edu/ for more details.

Owen writes that he will be happy to answer questions pertaining to this interesting new project.

T-PEN: A New Tool for Transcription of Digitized Manuscripts

October 22, 2012

One of the exciting turns of events for scholars has been the growing number of unpublished, handwritten documents now available on the web. Textual scholars no longer have to travel to distant countries to view the essential manuscript(s) for their research. Instead, they can sit down in front of their laptops and display each successive page. This has moved many sources that were once difficult to access into the “completely accessible” category.

But does that make them usable? Despite the desire to make many manuscript collections freely accessible, many digital repositories use “tile-based” viewers to protect against unauthorized copying of the collection. This is completely understandable, but those viewers sometimes place limits on how a digital surrogate can be viewed. They can even make it difficult for scholars to extract what they often want most: a transcription of the manuscript’s content. Moreover, the current practice of transcribing from digitized pages easily permits mistakes. Transcribers typically move between the image and a word-processing application in another display window (either on the same screen or on a different monitor). That process can reproduce the same kinds of mistakes the original scribe could make: haplography (omission of content between similar or identical words; “saut du même au même”), dittography (repetition of letters or syllables), duplication or omission (of letters, words, or lines), often caused by homoearcton and homoeoteleuton (similar beginnings and endings of words), and transpositions. Could it then be possible to make these digital manuscripts both accessible and highly usable?

T-PEN (Transcription for Paleographical and Editorial Notation) seeks to address both the accessibility and the usability of digital repositories. Developed by the Center for Digital Theology of Saint Louis University, in collaboration with the Carolingian Canon Law Project of the University of Kentucky, this new digital tool is a sophisticated web-based application that assists scholars in transcribing these manuscripts. To reduce the likelihood of transcription errors, we took advantage of digital technology to position the transcription and the exemplar so as to minimize the visual movement between the two as much as possible. We accomplished this with a simple but novel visualization of the lines of script in the exemplar, which we integrated with interactive transcription spaces. To build the tool, we developed an algorithm for “parsing” the lines of script in an image, and a data model that connects the image delivery of manuscript repositories with the actions of transcribers.
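T-PEN’s own parsing algorithm is not spelled out here, but a common baseline technique for finding lines of script in a page image is a horizontal projection profile: count the dark pixels in each row and treat the low-ink valleys as gaps between lines. A minimal sketch of that general idea (assuming Pillow and NumPy, which are not necessarily what T-PEN uses):

```python
# Sketch of line detection via a horizontal projection profile.
# This illustrates the general technique only, not T-PEN's actual algorithm.
from PIL import Image
import numpy as np

def find_line_bands(image_path, threshold=0.05):
    """Return (top, bottom) pixel rows for each detected band of script."""
    img = np.asarray(Image.open(image_path).convert("L"))   # grayscale page image
    ink = img < 128                                          # True where a pixel is dark
    profile = ink.sum(axis=1) / ink.shape[1]                 # fraction of dark pixels per row
    bands, start = [], None
    for y, value in enumerate(profile):
        if value > threshold and start is None:
            start = y                                        # entering a band of script
        elif value <= threshold and start is not None:
            bands.append((start, y))                         # leaving the band
            start = None
    if start is not None:                                    # band runs to the bottom edge
        bands.append((start, len(profile)))
    return bands
```

Real manuscript pages defeat a profile this simple often enough (slanted lines, multiple columns, marginalia), which is one reason T-PEN lets the transcriber correct the parse by hand, as described below.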

But we wanted T-PEN to offer more than just a means to ensure good transcription. We had, in fact,  three goals in mind:

  1. To build a tool useful for any kind of scholar, from the digital Luddite to those obsessed with text encoding;
  2. To provide as many tools as possible to enhance the transcription process;
  3. To help scholars make their transcriptions interoperable so that those transcriptions would never be locked into the world of T-PEN alone.

After two years of design, development, and intensive testing, this tool is now available to the wider public. It was built in the first instance for those working with pre-modern manuscripts, but there is nothing in its design that would prevent early modern scholars from exploiting T-PEN for their purposes. T-PEN is a complex application, and explaining every function would take several posts. Instead, I want to provide a brief overview of how someone can set up a transcription project, how they can use T-PEN to produce high-quality work, and finally how to get transcriptions out of T-PEN and into other applications or contexts.

Choosing your Manuscript

T-PEN is meant to act as a nexus between digital repositories and the scholar. To date, we have negotiated access to over 3,000 European manuscripts and we are working on further agreements to expand that list. Our aim is to have a minimum of 10,000 pre-modern European manuscripts available for transcription. Even with that number, we will never be able to satisfy all potential users. We therefore enabled private uploads to extend T-PEN’s usability. Many scholars have obtained digital images of a manuscript, along with permission to make use of them for research purposes. Private uploads to T-PEN are an extension of that “fair use.” Users zip the JPG images into a single file and then upload them to T-PEN. These types of projects can only add five additional collaborators (see project management, below), and they can never become public projects. Currently T-PEN can support around 300 private projects, and we are expanding our storage capacity for more.

T-PEN's Catalog of Available Manuscripts
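Preparing a private upload is the one step users handle themselves; the description above says only that the JPG images are zipped into a single file. A minimal sketch of that step with Python’s standard library (the folder and file names are hypothetical):

```python
# Bundle a folder of JPG page images into one zip file for a private upload.
# The paths here are invented examples.
import zipfile
from pathlib import Path

def zip_page_images(folder, out_file="manuscript_upload.zip"):
    with zipfile.ZipFile(out_file, "w", zipfile.ZIP_DEFLATED) as zf:
        for jpg in sorted(Path(folder).glob("*.jpg")):   # keep pages in order
            zf.write(jpg, arcname=jpg.name)              # store only the file name
    return out_file

zip_page_images("scans/ms_example")
```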

Transcribing your Manuscript

Once you select your manuscript you can immediately begin your transcription work. T-PEN does not store any permanent copies of the page images, so each time you request to see a page T-PEN loads the image from the originating repository. If you have never transcribed the page before, T-PEN takes you to the line parsing interface. This adds a little time to the image loading as T-PEN parses the image in real time. When it finishes, you will see a page that looks like this:

T-PEN's Line Parsing Interface

T-PEN attempts to identify the location of each line on the page and then uses alternating colors to display those coordinates. As you can see, we make no claim of absolute perfection. We worked on this algorithm for almost two and a half years, and after extensive testing we’ve been able to promise, on average, an 85% success rate. A number of factors prohibit complete accuracy, so we offer a way for the transcriber to introduce corrections herself. You can add, delete, or resize columns, and insert or merge lines as well. You can also adjust the width of individual lines if they vary in length, or combine a number of lines if you want them grouped together in your transcription. Sometimes manuscripts don’t fit neatly into our modern, rectilinear world: many handwritten texts were written at an angle or were so tightly bound that the page could not be photographed flat. T-PEN ultimately doesn’t care: for connecting a transcription to a set of coordinates on a digital image, what really matters is that the left side of the line box aligns with the written text. That’s the anchor.

When you are satisfied with the line parsing, you can start transcribing. The transcription interface looks like this:

T-PEN Transcription User Interface

This interface allows you to transcribe line by line, with the current line surrounded by a red box. There are some basic features to note. First, as you transcribe, the previous line is displayed above, because sentence units are so often split across lines. Transcription input is stored in Unicode, and T-PEN will accept whatever language input the user’s computer is set up to type. If there are special characters in the manuscript, the transcriber can insert them by clicking on the special-character buttons (the first ten are hot-keyed to CTRL+1 through 0).

Second, users can encode their transcription as they go. On this aspect, T-PEN is both innovative and provocative. Many scholarly projects that include text encoding adopt a three-step process: the scholar transcribes the text and then hands it to support staff to complete the encoding, which is finally vetted by the scholar. However, there are many cases in which semantic encoding of transcriptions has to capture how the text is presented on the page. T-PEN innovatively allows scholars to integrate transcription (with the manuscript wholly in view) and encoding into one step. Often the best encoder is the transcriber herself. That innovation comes with a provocative concept, however. In digital humanities, where TEI is the reigning orthodoxy, T-PEN is at least heterodox if not openly heretical. T-PEN’s data model neither expects nor requires a transcription to be encoded, much less that it use TEI as the basis of structured text. Instead, T-PEN treats all XML elements as simply part of the character stream. T-PEN can support transcribers who don’t want to encode at all as well as those who are wholly committed to the world of TEI. For those who want to encode, a schema can be linked to a project to produce a set of XML buttons that can be used in the transcription interface.
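Because T-PEN treats markup as ordinary characters, a transcribed line with inline tags is just a string; nothing in the data model parses or validates it. As a rough illustration (the tag names are arbitrary examples, and the tag-stripping helper is mine, not a T-PEN feature), a plain-text view can be produced simply by dropping anything tag-shaped:

```python
# A transcribed line in which the markup is simply part of the character stream.
# The tags shown are arbitrary examples; T-PEN imposes no encoding scheme.
import re

line = 'In principio erat <hi rend="italic">uerbum</hi> et <persName>Deus</persName> erat uerbum'

def plain_text(transcription):
    """Drop anything tag-shaped, for a reader who wants no encoding at all."""
    return re.sub(r"<[^>]+>", "", transcription)

print(plain_text(line))
# In principio erat uerbum et Deus erat uerbum
```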

Project Management

For those who simply want to start transcribing, project management will not be that important. For those who envisage a more sustained project (and perhaps a collaborative one at that), it will be vital. There are a number of components in managing a T-PEN project, but here I want to highlight two of them.

Collaboration. Like most digital tools, T-PEN allows you to invite collaborators to join your project. All members of a project have to be registered on T-PEN (but that’s free and requires only a full name and an email address). Managing collaboration involves three features, though only a few projects will use all three. First, there is adding and deleting project members. Any member of a project can see who else is a member, but only the project leader can add or delete members. A project leader can even have T-PEN send an invitation to someone not yet registered with T-PEN (and once they register, they automatically become part of that project).

Collaboration in Project Management

Second, there is a project log to inspect. This log records any activity that changes the content or parameters of the project. This can be particularly helpful when tracking down how a transcription has changed in a shared project (and a user can display the history of each line in the transcription UI). Finally, projects can make use of T-PEN’s switchboard feature. This is for transcription projects that are part of a larger project, where the transcriptions will be aggregated in another digital environment. Switchboard does two things for a project: (1) it allows different projects to share the same XML schema so that all transcriptions will conform to the larger project’s standards; and (2) it exposes the transcription through a web service to permit easy export to the larger project.

Project Options. The two most important options are button management and setting the transcription tools. As seen in the screenshot of the transcription interface, users can use buttons to insert both XML elements and special characters. Those buttons are created and modified as part of the project options. If there is an XML schema for the project, a project leader can link it to the project. Then, in button management, the elements in that schema populate the XML button list. The button populator does not distinguish between metadata elements and elements found in the body of an encoding schema, so users have to modify the button list to cull the elements that won’t be used during transcription. There’s an additional advantage to editing that list: each button can gain a more readable title. This can be helpful if the encoding schema exploits the varying uses of the <seg> or <div> elements in TEI; when the possible deployment of a tag might be unclear to those with less experience of TEI, a more straightforward title can be a better guide to its use.

Special characters allow the user to enter characters from Unicode that may not be available on a standard keyboard. These can be created by entering the correct Unicode value for the character. The first ten characters are mapped to hotkeys CTRL+1 through 0.
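To make the idea concrete, the sketch below shows how a few characters useful to transcribers of pre-modern manuscripts can be produced from their Unicode values; the particular characters are my examples, not T-PEN defaults:

```python
# Characters common in pre-modern manuscripts, created from Unicode code points.
# These particular characters are illustrative choices, not T-PEN's defaults.
special = {
    "thorn":  chr(0x00FE),  # þ
    "eth":    chr(0x00F0),  # ð
    "long s": chr(0x017F),  # ſ
    "yogh":   chr(0x021D),  # ȝ
}
for name, char in special.items():
    print(f"{name}: {char} (U+{ord(char):04X})")
```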

Finally, the set of tools available on the transcription interface is configured in project options. T-PEN has thirteen built-in tools, most of which were included to assist transcribers of pre-modern manuscripts, though some will be helpful to editors of modern texts. If those tools are unhelpful, the user can expand the list: all that is needed is the name of the tool and its URL. Once it is attached to the project, the user will be able to access that tool in the transcription interface.

Getting your Transcription out of T-PEN

Digital tools often fall into one of two categories. “Thinking” tools are ones that allow users to manipulate and process datasets in order to test a certain idea or to visualize an abstract concept. They can also allow the user to annotate a resource as a way of processing the scholar’s conception of the object’s meaning or the hermeneutical framework it may require. These tools are invaluable, but they do not easily produce results that can be integrated into a print or digital publication. The second type is what I call the production tool. With these applications, the final objective is to produce something that can be integrated into other contexts. T-PEN falls firmly into this second category, although it has its own annotation tool with which a user can record observations about each manuscript page (and it is compliant with the W3C standard, the Open Annotation Collaboration). Scholars normally transcribe for one of three reasons: to create a scholarly edition; to place those transcriptions in footnotes or in the appendices of a monograph; or to integrate an encoded text into a larger resource.

T-PEN supports four basic export formats: XML/plaintext, where the user can filter out one or more XML tags; PDF; RTF, which is compatible with most word processors; and, finally, basic HTML. For the first format, if the user has attached a header to the project, that header can be included in the export. There is an important caveat here: T-PEN was not designed to be an XML editor. We do offer a basic well-formedness check (which stops at the first error), but T-PEN does not offer full validation services. Most scholars who encode with T-PEN export their transcriptions to an XML editor for full validation of the file. The last three export formats include some simple transformations for text decoration (italics, bold, etc.). Users can also export the whole transcription or specify a range based on the pagination (or foliation) of the manuscript.

T-PEN's Export Options
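The division of labour described above (a quick well-formedness check in T-PEN, full validation in a dedicated XML editor) is easy to picture with a small sketch. The code below is an analogue of those export-time steps written with Python’s standard library, not T-PEN’s implementation, and the tag names passed to the filter are hypothetical:

```python
# Rough analogues of two export-time operations described above:
# a well-formedness check that stops at the first error, and a tag filter.
# Illustrative code only, not T-PEN's implementation.
import re
import xml.etree.ElementTree as ET

def well_formed(xml_string):
    """Return (True, None) if the string parses, else (False, the first error)."""
    try:
        ET.fromstring(xml_string)
        return True, None
    except ET.ParseError as err:          # parsing stops at the first error found
        return False, str(err)

def filter_tags(xml_string, drop=("pb", "lb")):
    """Remove the named tags (keeping their text) from an exported transcription."""
    pattern = r"</?(?:%s)\b[^>]*>" % "|".join(drop)
    return re.sub(pattern, "", xml_string)
```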

This post covers only the basics of T-PEN; there are more features available to the user. There is a demonstration video on YouTube where you can walk with one of T-PEN’s research fellows as she begins a transcription project. T-PEN is freely available, thanks to a major investment from the Andrew W. Mellon Foundation and a Level 2 Start-up grant from the National Endowment for the Humanities. So go to t-pen.org and register for an account.

NEH Digital Humanities Startup Grants: Funding the Future

May 13, 2012

Adapting the “high risk/high reward” model often employed in funding the sciences, NEH Digital Humanities Startup Grants reward originality. To be considered, the proposal must entail an “innovative approach, method, tool, or idea that has not been used before in the humanities” (Digital Humanities Startup Grants Guidelines, p. 2). These Startup Grants fund two levels of projects. As expected, the Level I award supports projects at the embryonic stage of development, while the Level II award funds projects that are more advanced and nearing the implementation stage. The Grant Guidelines provide full details.

In late March the NEH Office of Digital Humanities announced the most recent projects to be awarded an NEH DH Startup Grant. As in the past, the projects receiving funding were diverse and promising: a workshop to assist university presses in publishing digitally born scholarly monographs; tools to convert text to braille for the visually impaired; improvements to OCR correction technology; software adapted to enable better identification and cataloguing of various features within illustrations in the English Broadside Ballad Archive; and a prototype application to promote analysis of visual features of printed books, such as typeface, margins, and indentations, to name a few.

While these grant-winning projects all carry brief descriptions, they are still in their gestation or early implementation phase. A better sense of what this funding yields can be gleaned from the NEH “Videos of 2011 Digital Humanities Start-Up Grantees” as well as the other online material that has emerged in connection with these projects. The following showcases a few of the 2011 DH Startup grantees most likely to interest EMOB readers.

As the project’s title “New Methods of Documenting the Past: Recreating Public Preaching at Paul’s Cross, London, in the Post-Reformation Period” suggests, this project seeks to reproduce the seventeenth-century experience of hearing a sermon at Paul’s Cross. To do so, it employs architectural modeling software and acoustic simulation software to re-create conditions that mimic those of a time in which unamplified public speaking competed with the sounds of urban life. One of the questions this simulation aims to answer is whether the printing of many Paul’s Cross sermons points to their popularity among those who gathered to hear them or, instead, to the need to distribute printed versions because their original oral delivery was inaudible save for a few. English professor and Project Director John Wall’s The Virtual Paul’s Cross website details the project’s objectives and its progress. The site also contains a blog that offers occasional updates. Here, for example, it offers various views of the draft model created by Josh Stephens using SketchUp, such as this perspective of the Churchyard with the east side of the Cathedral:


From John Wall’s The Virtual Paul’s Cross Project blog, May 15, 2012

Preliminary results from the acoustic simulation will be available this month.

Another project, the University of South Carolina Research Foundation’s “History Simulation for Teaching Early Modern British History,” integrates gaming with the humanities. The interactive “Desperate Fishwives” game, first conceived by Ruth McClelland-Nugent (History, Augusta State University), who serves as a consultant to the project, enables students to experience life in a seventeenth-century village by assuming the persona of a villager who must adhere to the conventions and social rules of early modern England or face the consequences. Play is designed to take place in hour-long segments, so the game can be played over several class periods or assigned for homework. After the completion of play, students write a narrative of their experiences, an assignment aimed at teaching historiography. An article appearing in the Columbia, SC Free Times, “Desperate Fishwives Players Navigate 17th Century English Village Life,” offers an enthusiastic account of this teaching tool. In addition to producing this specific game, the project also hopes to provide tools and documentation that would help humanities scholars create educational simulation games suitable for their particular disciplines.

In comments to an earlier EMOB post, we referenced a project out of the University of Washington, “The Svoboda Diaries Project: From Digital Text to ‘New Book'”. Yet its innovativeness warrants mentioning it again here. The project features a 19th-century travel diary written by a European but in Arabic. The following description, taken from the project’s successful 2011 NEH grant abstract, offers a succinct overview of this rich project:

Based on its work with a large corpus of personal diaries from 19th century Iraq, the project will develop and test a process for the simultaneous web and print-on-demand publication of texts and transcriptions of original manuscripts with annotation, indexing, translation, images, etc. in complex scripts [l-r and r-l, English and Arabic, in our case]. This process involves a re-thinking of “the book” that will use digital and new-media resources to combine the functions of traditional print publication, including editing, book design, printing, advertising, and distribution with web-based publication and produce, in house, a low-cost printed book supported by a wide array of web-based materials. Moreover, the “book” (both web and print) will flow directly from a richly tagged TEI-compatible XML text prepared for scholarly investigation, and be capable of continuous regeneration from up-dated and enriched versions. (NEH Funded Projects Query Form)

For EMOB readers, the project’s interest may well stem from its work in creating a publishable book on its website that anyone can produce using a machine like the Espresso Book Machine (see an earlier EMOB post). An equally fascinating feature of this project is its dual display of English and Arabic text, as this sample page illustrates.

Designed especially for literary analysis, University of California Berkeley’s WordSeer: A Text Analysis Tool for Examining Stylistic Similarities in Narrative Collections uses grammatical structure and natural language patterns; its functions include visualization tools. In addition to the NEH lightning round video, other videos and blogs detail ways that this tool has been used to ask questions of Shakespeare’s works as well as African American slave narratives.
In WordSeer demos: Men and Women in Shakespeare, the tool is employed to compare analytically the ways in which men and women are depicted in various circumstances. The video “How Natural Language Processing is Changing Research” provides a more extended look at WordSeer’s usefulness for analyzing slave narratives, but its purpose is also to underscore how such a tool can benefit humanities scholars. In this video the discussion veers toward presenting reading as a chore from which humanities scholars seek relief. On that note, a student in Dr. Michael Ullyot’s undergraduate ENG 203 course, “Hamlet in the Humanities Lab,” at the University of Calgary offers some pertinent comments. In her penultimate blog post for the course, Stephanie Vandework devotes a section to “The Pros and Cons of Exploratory Analysis” and examines more closely the claims in the WordSeer Shakespeare demo, finding some to suffer from overgeneralization. (For a view of the course from the instructor’s perspective, see Dr. Ullyot’s presentation, Teaching Hamlet in the Humanities Lab, for the Renaissance Society of America conference this past March 2012.)

These four projects represent just a glimpse of the many fascinating undertakings featured in the NEH 2011 Lightning Round Videos. That some projects such as WordSeer are already being incorporated into courses speaks to the rapidity with which research and pedagogical practices are changing.

Exploring reception history in Women Writers Online

September 16, 2010

We’re delighted to have been invited to contribute to the EMOB blog. The Brown University Women Writers Project has a strong interest in the issues raised here and we hope to learn a great deal from EMOB’s readers about how scholars work with digital collections.

In this first posting, we’d like to announce an upcoming project for which we just received funding, and solicit the attention and thoughts of this community as we start planning. Once the project gets started, we’ll have more concrete things to seek feedback on and also opportunities for contribution.

Many readers of this blog will already have seen the announcement of the WWP’s most recent NEH grant, “Cultures of Reception: Transatlantic Readership and the Construction of Women’s Literary History”. This three-year project will begin in January 2011, and its overall goal is to gather and study materials that can help us grasp the reception history for texts in the WWO collection. We’ll be focusing on published reviews from the late 18th and early 19th centuries, but also including other sources such as anthologies, early literary histories, and manuscript materials like diaries, letters, and commonplace books.

Our plan is to digitize reviews and contemporary critical responses to women’s writing, in a way that enables us to mark explicitly for study a set of key points for analysis: for instance, the author of the review, the text being reviewed, the evaluative language used, any other texts with which the reviewed text is being compared, the terms of the comparison, as well as information needed to enable us to trace geographical and temporal connections. These source materials will be published through an interface that allows readers of WWO to examine the reception history of a given text (or textual exchange), and also to get a broader view of the terms in which women’s writing was being read and evaluated, both publicly and privately.
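To give a concrete (and entirely hypothetical) picture of the kind of record this markup might produce, the sketch below builds one reception entry with invented element names and placeholder values; the WWP’s actual encoding scheme for the project is not specified in this announcement:

```python
# A hypothetical reception record; element names and values are placeholders,
# not the WWP's actual encoding scheme for "Cultures of Reception".
import xml.etree.ElementTree as ET

review = ET.Element("review", attrib={"source": "[periodical title]", "date": "[YYYY-MM]"})
ET.SubElement(review, "reviewer").text = "[author of the review]"
ET.SubElement(review, "reviewedText").text = "[text being reviewed]"
ET.SubElement(review, "evaluation").text = "[evaluative language used]"
ET.SubElement(review, "comparedWith").text = "[other texts compared, and the terms of comparison]"
ET.SubElement(review, "place").text = "[place of publication, for geographical connections]"

print(ET.tostring(review, encoding="unicode"))
```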

There will be opportunities for participation of various kinds, including contributions of contemporary reader responses to WWP texts, and also input on the design of the interface for working with the source materials. We will also be very glad to hear from anyone who is working directly on reception history, who might be interested in working with us more closely (for instance, using the collection to prepare an article that we might publish with WWO). At the outset, though, we also have a few issues on which we’d be very glad of people’s thoughts:

  1. What does one need to know about reading and reviewing practices in order to make a meaningful study of reception history? What are the potential blind spots in this project?
  2. What opportunities for new questions and approaches might a collection like this open up? For instance, how might geographical information affect our understanding of readership and reception? What kinds of interface tools would best facilitate working with these materials?
  3. What other kinds of research questions might arise out of these materials? Are there larger purposes we should be bearing in mind for this data that might affect how much detail we capture, etc.?

We look forward to following the discussion and learning more!

best wishes,

Julia Flanders
John Melson

Women Writers Project, Brown University Center for Digital Scholarship

http://www.wwp.brown.edu