Readers will be interested in Julia Flanders’ announcement that Women Writers Online will be free and open to the public during March. WWO can be accessed at http://www.wwp.brown.edu.
The results of the Fall 2013 Gale Cengage SUNY-wide essay competition are in. Three awards were given: 1 for the best graduate essay ($500); 1 for the best undergraduate essay using ECCO ($250); and 1 for the best undergraduate essay using NCCO ($250). Essays were read by an independent judge.
The winners are:
Erin Annis, “‘The Scotch Intruders’: The Political Context for Scottish Integration into the Eighteenth-Century British Empire”
HIST 600 Research Seminar, SUNY Binghamton (Dr. Douglas Bradburn)
Stephanie Boutin, “True Victorian Womanhood and Manhood”
ENG 316 Victorian Nonfiction & Poetry, SUNY Plattsburgh (Dr. Genie Babb)
Christy Harasimowicz, “Samuel Richardson’s Pamela; or, Virtue Rewarded: Justification of Masculine Activity and the Avenue to Virtue”
ELIT 287 From Romance to Gothic, SUNY Oneonta (Dr. Jonathan Sadow)
Congratulations to all who submitted essays.
Gale Cengage gave SUNY schools a great opportunity this semester by offering free trial access to ECCO, Burney, and NCCO. I, for one, learned a lot from working with undergraduates in my Gothic Novels course as they searched ECCO for relevant material for their final research papers. Those papers were mixed, with some outstanding essays and some less successful attempts. I summarize my experience below:
- ECCO must be part of a strong digital collection in order to be fully useful. Spotty digital holdings make using ECCO difficult. For instance, without a subscription to the Oxford Dictionary of National Biography, new users find it difficult both to identify the author of a lesser known work and to assess that work’s historical or literary significance.
- Using ECCO requires both competency with secondary sources and access to those sources. Though some students used many secondary sources, even ordering books on interlibrary loan, many were more timid about using JSTOR and Project Muse than I anticipated. Now that we purchase almost no books, galvanizing interest in scholarly books feels more difficult. Am I imagining this?
- Using ECCO was great for new critical readings. My students wrote lively and insightful papers using the search function to demonstrate the significance of words, phrases, or images in a given text. The search function, however imperfect, helped students “read” more attentively.
- Using ECCO posed significant challenges for historical readings, ironically the very readings that would theoretically most benefit from such a resource. I prepared handouts, explained key historical moments and figures, and discussed competing approaches to these novels, but ultimately students required written accounts of contexts that they could study on their own. Printing excerpts from secondary sources, particularly ones that provided differing points of view, helped. The takeaway: students using ECCO would benefit from a textbook/anthology that clustered primary and secondary sources and provided suggestions for further reading in ECCO. This seems like a promising publishing opportunity.
Some found ECCO a chore; others liked it; some quietly noted that it grew on them. All of them acquired an appreciation for the vastness and richness of the archive at their fingertips. Most felt students should have access to it. Using ECCO stretched us all as readers and interpreters of eighteenth-century texts, never something to be dismissed.
Our SUNY experiment using ECCO (and, in other courses, NCCO) has begun. The initial difficulty was getting students to use ECCO. To that end, I designed the introductory exercise listed below, which resulted in thoughtful papers that often used proximity and wildcard searches. Best of all, not only do students seem more comfortable using ECCO after completing this exercise, they also are more attuned to Radcliffe’s craft.
The assignment is designed for an undergraduate class on the Eighteenth-Century Gothic Novel.
I would love to hear about other successful exercises or assignments using ECCO, NCCO, or Burney, especially exercises asking students to study historical contexts.
Word Searching in ECCO (Eighteenth-Century Collections Online)
Due: Monday, 7 October, in class.
Length: 1 page, typed and double-spaced
- Go to the Feinberg Library home page
- Click on “Find Articles”
- Click on “Databases by Subject”
- Click on “English/Literature”
- Click on “Eighteenth-Century Collections Online”
- Do a title search for “Romance of the Forest” with “1792” as the date [it was published in 1791, but the earliest edition ECCO has is the 2nd edition, published in 1792].
- Note that each of its three volumes comes up as a different book; each volume will need to be searched for the word you select.
- Select a word that seems important to the novel: “forest,” “romance,” “labyrinth,” “asylum,” “tears,” “door,” “hidden,” “fear,” “beauty,” “prayer,” “road,” “convent,” “reason,” “rational,” “imagination,” and so forth.
- Do a word search for every occurrence of that word in each volume. Remember that words containing an initial or medial “s” might need “false” searches, because the eighteenth-century long s (ſ) is often read as “f”: “case,” for example, requires a search for “cafe.” Consider synonyms. Consider alternate spellings of words.
- When necessary, look up the eighteenth-century meaning of words in the Oxford English Dictionary, also available on the Feinberg Library English Department web site.
- Write a brief (1 page) account of the role of that word in Radcliffe’s narrative, in her construction of character, in her construction of tone, or in other key aspects of her artistry.
- * A search for “poet*” finds words with “poet” as the root: “poet,” “poetic,” “poetess,” “poetical,” “poets,” etc.
- ? A search for “wom?n” calls up “women” and “woman.”
- ! A search for “nun!” calls up “nun,” “nuns,” “nunn,” and “nune.”
- A search for “ladies n6 asylum” calls up texts with “ladies” and “asylum” within 6 words of one another.
- A search for “ladies w6 asylum” calls up texts with “ladies” appearing within 6 words before “asylum.”
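The long-s tip above (searching “cafe” for “case”) generalizes: since the eighteenth-century long s is usually misread as “f,” every non-final “s” in a search term may need an “f” variant. A minimal Python sketch of generating those variants (my illustration, not an ECCO feature):

```python
from itertools import product

def long_s_variants(word):
    """Return all search variants of `word` in which each non-final 's'
    may appear as 'f' (OCR's usual misreading of the long s)."""
    last = len(word) - 1
    # Each character contributes one "slot"; non-final 's' gets two options.
    slots = [('s', 'f') if ch == 's' and i != last else (ch,)
             for i, ch in enumerate(word)]
    return sorted({''.join(combo) for combo in product(*slots)})

print(long_s_variants('case'))    # → ['cafe', 'case']
print(long_s_variants('asylum'))  # → ['afylum', 'asylum']
```

A final “s” is left alone because the short s was used at word endings in eighteenth-century printing.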
An Information Literacy Pre- and Post-Assessment for a Research-Intensive Undergraduate Class Using Primary Sources (August 21, 2013)
This is Dave Mazella, posting a follow-up to Anna and Eleanor’s previous discussion of teaching with ECCO. As we talked about pedagogical strategies for including ECCO in eighteenth-century courses, the question arose of how one might assess these kinds of activities and their impact on student learning.
Julie Grob, a UH special collections librarian and a collaborator of mine, has generously agreed to share this IL pre-course assessment that she designed for a research-intensive course we developed together. This kind of assessment, taken at the beginning and end of the semester, can help you assess the impact of a semester’s work in primary sources. These questions were administered through SurveyMonkey.
The background to the course can be found in this co-written article we published in portal, a scholarly library journal available on JSTOR and Project MUSE. Julie developed these questions as we both worked through the ACRL Research Competency Outlines, which were very helpful for designing both assignments and assessments.
- Have you previously taken ENGL 3301, Introduction to Literary Studies? [this is my Intro to the Major course, which includes some work in Special Collections]
- Have you ever visited Special Collections, either with a class or on your own? If the former, for which class?
2. From the answers below, which is the best definition of primary sources?
- materials from the 18th century only
- the first sources you should look at when doing your research
- sources that contain contemporary accounts of an event, written by someone who experienced or witnessed that event
- any sources held by a library, regardless of format
3. From the answers below, which is the best definition of secondary sources?
- any materials held by a library that are not rare
- sources that are not relevant to your particular research
- sources that interpret an event, written by someone at least one step removed from that event
- any materials that were published after the 18th century
4. What kinds of materials are found in the UH Libraries’ Special Collections? (Please check any that apply).
- old books
- new books
5. How would you find out if a book about Benjamin Franklin is located in Special Collections?
- Come to Special Collections and look at the paper card catalog
- Come to Special Collections and wander through the book stacks
- Search for books about Benjamin Franklin in the library catalog, then “limit” your search to Special Collections
- Search for Benjamin Franklin under “archival finding aids” on the Special Collections website
6. Which of the following are common features of an 18th century book? (Select four).
- printed on vellum (animal skin)
- printed on paper
- bound in leather
- bound in colorful bookcloth
- illustrated with engravings
- illustrated with photographs
- words have a “long s”
- words have a “double y”
7. What kind of source would be most important for a scholar to consult if he or she wants to do original research (that is, research that creates new knowledge in the field)?
- an electronic source
- a primary source
- a secondary source
8. Which of the following databases would be most useful for finding articles about literature? (Select three).
- Philosopher’s Index
- Project Muse
9. If you search one of the Library’s electronic databases using a keyword and get back 500 hits, how might you most effectively change your search to get back a more manageable number of results?
- use a totally different keyword
- add a second keyword
- do a keyword search using Google instead
10. Where are you most likely to find accurate information about a famous person from the 18th century?
- Wikipedia (web site)
- MLA (database)
- Dictionary of National Biography (database)
We used this as part of our documentation of student learning for the SACS QEP, which helped fund the acquisition of some special collections material for the course.
In honor of Women’s History Month, Cambridge University Press’s Orlando: Women’s Writing in the British Isles from the Beginnings to the Present is offering free access during March. Orlando “provides entries on authors’ lives and writing careers, contextual material, timelines, sets of internal links, and bibliographies. Interacting with these materials creates a dynamic inquiry from any number of perspectives into centuries of women’s writing.”
To gain access, the login is womenshistory2013, and the password is Orlando.
EEBO Interactions, the web site that fused social networking and digital bibliography, is shutting down at the end of March 2013.
ProQuest’s decision to decommission EEBO Interactions should come as no surprise. If traffic indicates success, the site received too little to certify its academic or commercial value. The small core of contributors who worked brilliantly and doggedly to improve bibliographic entries was not enough to prove that value. Why should it be? In a world where crowd-sourcing promises instant and free correction, EEBO Interactions‘ small stream of corrections proved too little and too slow.
Nevertheless, the decision to shut down EEBO Interactions is a disappointment because it ends a promising and visionary venture on ProQuest’s part. ProQuest accomplished at least two great things. First, it offered a rare joint venture uniting academic and commercial worlds. Second, it conjured up the first bibliography to offer relational cataloging. If this iteration of that vision did not quite take off, it is to be hoped that later iterations will. Traffic may be one indication of success, but vision is another.
As an editor for EEBO Interactions, I would like to thank EI‘s contributors. They are a special group of readers, experts willing to put time into a promising experiment. I have told Stephen Brooks that I would ask emob readers what EEBO Interactions could have done to encourage traffic or otherwise improve. What might a second iteration include or not include? Is an unedited, crowd-sourced version of EEBO that runs parallel to EEBO the way to go for such interactions? Or is an ESTC-led editorial board the way? An option in between these two poles?
One note of caution. Anyone interested in preserving information recorded on EEBO Interactions should download material before the end of the month. ProQuest will save material contributed to EI in some form, but it will be difficult to access.
The University of California at Santa Barbara has created a free digital ballad collection called The English Broadside Ballad Archive (EBBA), which provides access to more than 8,000 seventeenth-century ballads. The collection includes ballads from the Pepys Collection, the Roxburghe Collection, the Euing Collection, and the Huntington Library. EBBA is directed by Patricia Fumerton at UCSB. This project was supported by the NEH.
Individual entries provide links to sheet facsimiles, facsimile transcriptions, and often recordings. These features facilitate introducing students both to ballads’ visual details–ornaments, woodcuts, columned verse–and to their tunes.
Cataloging is full and includes the following:
EBBA ID: An internal identifier. Each individual ballad in the archive has a unique EBBA ID.
Title: A diplomatic transcription of the ballad title as it appears on the ballad sheet. The title consists of all ballad text before the first lines of the ballad, including verse headers but excluding text recorded elsewhere under other catalogue headings (such as the license or author, date, publisher and printer imprints).
Date Published: The year—or, in most cases, range of years—during which EBBA believes the ballad to have been published. See Dates.
Author: The recognized author of the ballad in cases where an indication of authorship has been printed on the ballad or, in the case of Pepys ballads, when Weinstein has identified an author from external sources (e.g., Wing, Rollins).
Standard Tune: The standardized name for the melody (according to Claude M. Simpson or other reliable sources). Clicking the standard tune name will return all ballads with the same melody, including alternate tune titles.
Imprint: A diplomatic transcription of the printing, publishing, and/or location information as it appears on the ballad sheet.
License: A diplomatic transcription of the licensing or permission information as printed on the ballad.
Collection: The name of the collection to which the ballad belongs. In cases where the ballad is not part of a named collection, the name of the holding library plus “miscellaneous” will appear. For example, Huntington Library ballads that are not part of a collection are grouped as “HEH Miscellaneous.”
Sheet/Page: For ballads that are collected as independent sheets, the citation page displays the word “Sheet” and lists the sheet number given to it by its holding institution (usually part of its shelfmark). For ballads bound in a book, the citation page displays the word “Page” and lists the page number within the bound volume.
Location: The name of the holding institution.
Shelfmark: The shelfmark assigned by the holding institution.
ESTC ID: The Citation Number for the English Short Title Catalogue (ESTC). Use this number to find the full ESTC citation for any given ballad at http://estc.bl.uk/.
Keyword Categories: The keywords from EBBA’s standardized keyword list that relate to the ballad’s theme and content.
Notes: Clarify potential areas of confusion for users, such as ballads that have print on both sides of a sheet.
MARC Record: A link to our MARC-XML records.
Additional Information: Information specific to each part of the ballad.
Title: Separate titles for multi-part ballads.
Tune Imprint: Tune title(s) as printed.
First Lines: A diplomatic transcription of the first two lines of the ballad text proper, below any heading information included in the title or elsewhere under other catalogue headings.
Refrain: Repeated lines at the end of or within ballad stanzas.
Condition: Description of ballad sheet damage and the current state of the sheet. (This information is from Weinstein and is currently for the Pepys collection only.)
Ornament: A list of decorations made of cast metal that appear on the ballad. Frequently used to fill empty spaces in the forme and/or to delimit parts of the ballad text, these ornaments include vertical rules, horizontal rules, and cast fleurons. (This information is from Weinstein and is currently for the Pepys collection only.)
Ballad scholars working with EEBO or ECCO will be familiar with the difficulty of finding ballads, which makes the English Broadside Ballad Archive and the Bodleian Library’s Broadside Ballads necessary resources.
Together with new printed resources, such as Patricia Fumerton and Anita Guerrini’s Ballads and Broadsides in Britain, 1500-1800 (Ashgate 2010) and Angela McShane’s Political Broadside Ballads of Seventeenth-Century England: A Critical Bibliography (Pickering & Chatto 2011), these digital resources provide a robust and growing archive for the systematic study of a format whose transiency may have discouraged such studies in the past.
The following announcement, from Owen Williams, Assistant Director of the Folger Institute, will be of interest to readers:
In July 2013, the Folger Institute will offer “Early Modern Digital Agendas” under the direction of Jonathan Hope, Professor of Literary Linguistics at the University of Strathclyde. It is an NEH-funded, three-week institute that will explore the robust set of digital tools, with their period-specific challenges and limitations, that scholars of early modern English now have at hand. “Early Modern Digital Agendas” will create a forum in which twenty faculty participants can historicize, theorize, and critically evaluate current and future digital approaches to early modern literary studies—from Early English Books Online-Text Creation Partnership (EEBO-TCP) to advanced corpus linguistics, semantic searching, and visualization theory—with discussion growing out of, and feeding back into, their own projects (current and envisaged). With the guidance of expert visiting faculty, attention will be paid to the ways new technologies are shaping the very nature of early modern research and the means by which scholars interpret texts, teach their students, and present their findings to other scholars.
This institute is supported by an Institutes for Advanced Topics in the Digital Humanities grant from the National Endowment for the Humanities’ Office of Digital Humanities. Please visit http://emdigitalagendas.folger.edu/ for more details.
Owen writes that he will be happy to answer questions pertaining to this interesting new project.
One exciting turn of events for scholars has been the growing number of unpublished, hand-written documents now available on the web. Textual scholars no longer have to travel to distant countries to view the essential manuscript(s) for their research. Instead, they can sit down in front of their laptops and display each successive page. This has moved many sources that were once difficult to access into the “completely accessible” category.
But does that make them usable? Despite the desire to make many manuscript collections freely accessible, many digital repositories use tile-based viewers in order to prevent unauthorized copying of the collection. This is completely understandable, but those viewers sometimes place limits on how a digital surrogate can be viewed. They can even make it difficult for scholars to extract what they often want most: a transcription of the manuscript’s content. Moreover, the current practice of transcribing from digitized pages can easily permit mistakes to occur. Transcribers currently move from the image to a word-processing application in another display window (either on the same screen or on a different monitor). That process can easily reproduce the same mistakes that the original scribe could make: haplography (omission of content between similar or identical words; “saut du même au même”), dittography (repetition of letters or syllables), duplication or omission (of letters, words, or lines), often caused by homoeoarcton and homoeoteleuton (similar beginnings and endings of words), and transpositions. Could it then be possible to make these digital manuscripts both accessible and highly usable?
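As a toy illustration of how one such slip can be caught mechanically (my sketch, not a feature of any repository or tool mentioned here), a few lines of Python suffice to flag dittography, the accidental repetition of a word:

```python
import re

def find_dittography(text):
    """Return (index, word) pairs where a word is immediately repeated,
    a classic scribal and transcriptional error."""
    words = re.findall(r"[\w']+", text.lower())
    return [(i, w) for i, (w, nxt) in enumerate(zip(words, words[1:])) if w == nxt]

print(find_dittography("the scribe wrote the the same word twice"))  # → [(3, 'the')]
```

Haplography, the mirror-image error, is harder to detect automatically because the omitted text is simply absent; catching it requires comparison against the exemplar, which is exactly the visual alignment problem T-PEN addresses.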
T-PEN (Transcription for Paleographical and Editorial Notation) seeks to address both the accessibility and usability of digital repositories. Developed by the Center for Digital Theology of Saint Louis University, in collaboration with the Carolingian Canon Law Project of the University of Kentucky, this new digital tool is a sophisticated web-based application that assists scholars in transcribing these manuscripts. To reduce the likelihood of transcription errors, we took advantage of digital technology to place the transcription and the exemplar together in a way that minimizes the visual movement between the two. We accomplished this with a simple but novel visualization of the lines of script in the exemplar, which we integrated with interactive transcription spaces. To build the tool, we developed an algorithm for “parsing” the lines of script in an image, and a data model that connects the image delivery of manuscript repositories with the actions of transcribers.
But we wanted T-PEN to offer more than just a means to ensure good transcription. We had, in fact, three goals in mind:
- To build a tool useful for any kind of scholar, from the digital Luddite to those obsessed with text encoding;
- To provide as many tools as possible to enhance the transcription process;
- To help scholars make their transcriptions interoperable so that those transcriptions would never be locked into the world of T-PEN alone.
After two years of design, development, and intensive testing, this tool is now available to the wider public. It was built in the first instance for those working with pre-modern manuscripts, but there is nothing in its design that would prevent early modern scholars from exploiting T-PEN for their purposes. T-PEN is a complex application, and explaining every function would take several posts. Instead, I want to provide a brief overview of how someone can set up a transcription project, how they can use T-PEN to produce high-quality work, and finally how to get transcriptions out of T-PEN and into other applications or contexts.
Choosing your Manuscript
T-PEN is meant to act as a nexus between digital repositories and the scholar. To date, we have negotiated access to over 3,000 European manuscripts, and we are working on further agreements to expand that list. Our aim is to have a minimum of 10,000 pre-modern European manuscripts available for transcription. Even with that number, we will never be able to satisfy all potential users. We therefore enabled private uploads to extend T-PEN’s usability. Many scholars have obtained digital images of a manuscript with permission to use them for research purposes. Private uploads to T-PEN are an extension of that “fair use.” Users zip the JPG images into a single file and then upload them to T-PEN. Projects of this type can add only five additional collaborators (see project management, below), and they can never become public projects. Currently T-PEN can support around 300 private projects, and we are expanding our storage capacity for more.
Transcribing your Manuscript
Once you select your manuscript you can immediately begin your transcription work. T-PEN does not store any permanent copies of the page images, so each time you request a page T-PEN loads the image from the originating repository. If you have never transcribed the page before, T-PEN takes you to the line parsing interface. This adds a little time to the image loading, as T-PEN parses the image in real time. When it finishes, T-PEN displays the parsed page.
T-PEN attempts to identify the location of each line on the page and then uses alternating colors to display those coordinates. As you can see, we make no claim of absolute perfection. We worked on this algorithm for almost two and a half years, and after extensive testing we can promise, on average, an 85% success rate. A number of factors prohibit complete accuracy, so we offer a way for the transcriber to introduce corrections herself. You can add, delete, or re-size columns; insert or merge lines; and adjust the width of individual lines if they vary in length. You can even combine a number of lines if you want them grouped together for your transcription. Sometimes manuscripts don’t fit well in our modern, rectilinear world: many handwritten texts were written at an angle or were so tightly bound that the page could not be photographed flat. T-PEN ultimately doesn’t care: what really matters for connecting a transcription to a set of coordinates on a digital image is that the left side of the line box aligns with the written text. That’s the anchor.
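The post does not describe T-PEN’s parsing algorithm itself, but a common baseline for this task is a horizontal projection profile: rows of the image containing ink are grouped into line bands. A simplified sketch of that general technique (my reconstruction for illustration, not T-PEN’s actual code):

```python
def detect_line_bands(image, ink_threshold=128):
    """Group rows containing 'ink' (dark pixels) into (top, bottom) line bands.
    `image` is a 2D list of grayscale values, 0 = black, 255 = white."""
    bands, start = [], None
    for y, row in enumerate(image):
        has_ink = any(px < ink_threshold for px in row)
        if has_ink and start is None:
            start = y                     # a new line band begins
        elif not has_ink and start is not None:
            bands.append((start, y - 1))  # the band ended on the previous row
            start = None
    if start is not None:                 # band runs to the bottom edge
        bands.append((start, len(image) - 1))
    return bands

# Two "lines" of text separated by a blank row:
page = [[255]*4, [0, 255, 0, 255], [0]*4, [255]*4, [255, 0, 255, 255]]
print(detect_line_bands(page))  # → [(1, 2), (4, 4)]
```

Skewed script, tight bindings, and show-through defeat naive approaches like this one, which is why the manual correction tools described above matter so much.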
When you are satisfied with the line parsing, you can start transcribing in the transcription interface.
This interface allows you to transcribe line by line, with the current line surrounded by a red box. There are some basic features to note. First, the previous line is displayed above the transcription space, since sentence units are so often split across lines. Transcription input is stored in Unicode, and T-PEN will accept whatever language input the user’s computer is set up to type. If there are special characters in the manuscript, the transcriber can insert them by clicking the special character buttons (the first ten are hot-keyed to CTRL+1 through 0).
Second, users can encode their transcription as they go. On this aspect, T-PEN is both innovative and provocative. Many scholarly projects that include text encoding adopt a three-step process: the scholar transcribes the text and then hands it to support staff to complete the encoding, which is finally vetted by the scholar. However, there are many cases in which semantic encoding of transcriptions has to account for how the text is presented on the page. T-PEN innovatively allows scholars to integrate transcription (with the manuscript wholly in view) and encoding into one step. Often the best encoder is the transcriber herself. That innovation comes with a provocative concept, however. In digital humanities, where TEI is the reigning orthodoxy, T-PEN is at least heterodox if not openly heretical. T-PEN’s data model does not expect, much less require, a transcription to be encoded, let alone to use TEI as the basis of structured text. Instead, T-PEN treats all XML elements as simply part of the character stream. T-PEN can support transcribers who don’t want to encode at all as well as those who are wholly committed to the world of TEI. For those who want to encode, a schema can be linked to a project to produce a set of XML buttons that can be used in the transcription interface.
Managing your Project
For those who simply want to start transcribing, project management will not be that important. For those who envisage a more sustained project (and perhaps a collaborative one at that), it will be vital. There are a number of components in managing a T-PEN project, but here I want to highlight two of them.
Collaboration. Like most digital tools, T-PEN allows you to invite collaborators to join your project. All members of a project have to be registered on T-PEN (but that’s free and requires only a full name and an email address). Collaboration management has three features, though only a few projects will use all three. The first is adding and deleting project members. Any member of a project can see who else is a member, but only the project leader can add or delete members. A project leader can even have T-PEN send an invitation to a non-T-PEN person (and once they register, they automatically become part of that project).
Second, there is a project log to inspect. This log records any activity that changes the content or parameters of the project. It can be particularly helpful when tracking down how a transcription has changed in a shared project (and a user can display the history of each line in the transcription UI). Finally, projects can make use of T-PEN’s switchboard feature. This is for transcription projects that may be part of a larger project, where the transcriptions will be aggregated in another digital environment. Switchboard does two things for a project: (1) it allows different projects to share the same XML schema so that all transcriptions conform to the larger project’s standards; and (2) it exposes the transcription through a web service to permit easy export to the larger project.
Project Options. The two most important options are button management and setting the transcription tools. As noted in the description of the transcription interface above, users can use buttons to insert both XML elements and special characters. Those buttons are created and modified as part of the project options. If there is an XML schema for the project, a project leader can link it to the project. Then, in button management, the elements in that schema populate the XML button list. The button populator does not discern between metadata elements and elements found in the body of an encoding schema, so users have to modify the button list to cull the elements that won’t be used during transcription. There’s an additional advantage to editing that list: each button can gain a more readable title. This can be helpful if the encoding schema exploits the varying use of the <seg> or <div> elements in TEI. When the possible deployment of a tag might be unclear to those with less experience with TEI, a more straightforward title can become a better guide to its use.
Special characters allow the user to enter Unicode characters that may not be available on a standard keyboard. These can be created by entering the correct Unicode value for the character. The first ten characters are mapped to hotkeys CTRL+1 through 0.
Finally, the set of tools available on the transcription interface is configured in project options. T-PEN has thirteen built-in tools, most of which were included to assist transcribers of pre-modern manuscripts; some will also be helpful to editors of modern texts. If those tools are unhelpful, the user can expand the list: all that is needed is the name of the tool and its URL. Once attached to the project, the tool will be accessible in the transcription interface.
Getting your Transcription out of T-PEN
Digital tools often fall into one of two categories. “Thinking” tools allow users to manipulate and process datasets in order to test an idea or to visualize an abstract concept. They can also allow the user to annotate a resource as a way of working out the object’s meaning or the hermeneutical framework it may require. These tools are invaluable, but they do not easily produce results that can be integrated into a print or digital publication. The second type is what I call the production tool. With these applications, the final objective is to produce something that can be integrated into other contexts. T-PEN falls firmly into this second category, although it has its own annotation tool with which a user can record observations about each manuscript page (and it is compliant with the W3C standard of the Open Annotation Collaboration). Scholars normally transcribe for one of three reasons: to create a scholarly edition; to place transcriptions in the footnotes or appendices of a monograph; or to integrate an encoded text into a larger resource.
T-PEN supports four basic export formats: XML/plaintext, where the user can filter out one or more XML tags; PDF; RTF, which is compatible with most word processors; and basic HTML. For the first, if the user has attached a header to the project, that header can be included in the export. There is an important caveat here: T-PEN was not designed to be an XML editor. We do offer a basic well-formedness check (which stops at the first error), but T-PEN does not offer full validation services. Most scholars who encode with T-PEN export their transcriptions to an XML editor for full validation of the file. The last three export formats include some simple transformations for text decoration (italics, bold, etc.). Users can also export the whole transcription or specify a range based on the pagination (or foliation) of the manuscript.
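A basic well-formedness check of the kind described, one that stops at the first error, can be approximated with any standard XML parser. A minimal Python sketch (my analogy, not T-PEN’s implementation), assuming the exported transcription has a single root element:

```python
import xml.etree.ElementTree as ET

def check_well_formed(xml_text):
    """Return (True, None) if the text parses as XML, else
    (False, description of the first error encountered)."""
    try:
        ET.fromstring(xml_text)
        return (True, None)
    except ET.ParseError as err:
        return (False, str(err))

print(check_well_formed("<l>O Rose thou art sick</l>")[0])  # → True
print(check_well_formed("<l>unclosed line")[0])             # → False
```

Well-formedness only guarantees that tags nest and close properly; checking the transcription against a TEI or project schema (validation) still requires a dedicated XML editor, as the post notes.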
This post only covers the basics of T-PEN. There are more features available to the user. There is a demonstration video on YouTube where you can walk with one of T-PEN’s research fellows as she begins a transcription project. T-PEN is freely available, thanks to a major investment from the Andrew W. Mellon Foundation and a Level 2 Start-up grant from the National Endowment for the Humanities. So go to t-pen.org and register for an account.