Category Archives: First Semester Blog

Final Thoughts on My Shuffle Along Project

Site link: http://richardcarlin.net/mykitchen/exhibits/show/shuffle-along--tours-1921-1923

In creating the two tour maps for Shuffle Along, I hoped to provide an engaging, easy-to-use representation of the challenges and opportunities faced by African-American theatrical companies touring the country in the 1920s.  I believed it was important to present contextual information in narrative form, accompanied by contemporary visual images, to place both the original (A Company) and spinoff (B Company) tours in context.

As I discovered through experimentation with the kepler.gl and Neatline software packages, it is important to select the right software to achieve your goals.  For the A Company–which toured in a more limited area–I wanted to show the timeline of dates along with full information (including photos of theaters, when available, and clips of early reviews) in an easy-to-use format for the site visitor.  I found Neatline to be the best solution for this task, particularly because of its embedded timeline feature, which allows you to quickly locate each appearance.  On the other hand, for the B Company, where I was more concerned with showing the geographic range of the tour, kepler.gl worked better.  As with all software packages, I found that I needed to continue experimenting in order to improve the visual representation offered by each platform.  Comments from other students were very helpful in this area, and I appreciated their input as I worked to develop my idea.
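
For readers who want to experiment with a similar project, below is a minimal sketch, in Python, of the kind of tour-itinerary table both platforms can work from: kepler.gl reads a CSV with coordinate columns directly, and a similar spreadsheet can be used to populate Neatline records through Omeka’s import tools.  The file name and rows are invented placeholders, not actual Shuffle Along tour data.

```python
# A minimal sketch of a tour-itinerary table. The file name and rows are
# invented placeholders, not actual Shuffle Along tour data.
import csv

stops = [
    # city, date (ISO format), latitude, longitude
    ("New York, NY", "1921-05-23", 40.7128, -74.0060),
    ("Boston, MA", "1922-07-17", 42.3601, -71.0589),
    ("Chicago, IL", "1922-11-12", 41.8781, -87.6298),
]

with open("a_company_tour.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["city", "date", "latitude", "longitude"])
    writer.writerows(stops)
```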

I already knew going into the project that the A Company played major cities like New York, Boston, and Chicago for extended dates but avoided any appearances south of Washington, DC; even so, they still encountered difficulties obtaining food and lodging due to the racism of the era.  The B Company, on the other hand, mainly appeared in one- or two-night stands and toured more widely, although in many cases they appeared in segregated theaters.  Nonetheless, mapping these appearances gave me a new appreciation for the struggles each company faced, as well as their achievement in opening the door for black entertainers on Broadway and beyond.

As with all DH projects, the next step is twofold: to continue to support the site by adding more features and expanding the story to embrace Eubie Blake’s full career, while also determining a social media strategy that will reach the core audience interested in theatrical history, African-American culture, and the history of American music.  I look forward to embracing these challenges.

Promoting My Shuffle Along Project

Audience: Who is your strategy aiming to reach?

I’m trying to reach scholars and the general public interested in music history, African-American culture, and Broadway and theater history.  I’m also hoping to help promote my recent biography of Eubie Blake, published by Oxford University Press.

Platform(s): What social media tools do you plan to use to reach this audience?

The blog will be the main tool, but it will link to my existing Facebook page for the book (https://www.facebook.com/Eubie-Blake-Rags-Rhythm-and-Race-100868321721599).  I would also cross-post to other Facebook groups of interest and to groups on Humanities Commons.

Messages: What message will appeal to this audience? What do you want to convey? What action do you want them to take?

I’m looking to engage with the broader scholarly and public community interested in the history of African American music, theater, and culture.  Because 2021 will be the 100th anniversary of the opening of Shuffle Along, I believe this is a good time to promote the importance of its contribution to the history of American theater and culture.  I would like to tie in to the anniversary year as a way of drawing broader attention to the show.

Measure: How will you measure the success of your strategy? Consider using SMART goals (Specific, Measurable, Attainable, Realistic, and Time-bound) to frame your responses.

I hope to build the blog’s following to at least several hundred followers within the first six months of posting the material (the Facebook page I currently host gets about 250-300 hits for each posting).

Crowdsourcing

Crowdsourcing allows archives that provide online access to scanned documents and other materials to enlist the general public’s assistance in performing tasks that would usually be handled by in-house staff.  Currently, this primarily means transcribing digital documents or annotating digital scans.  Crowdsourcing has the advantages of drawing new users to a site; expanding the “reach” of a collection beyond its core users in the academic or specialist community; and efficiently transcribing or otherwise annotating documents and other digital images without the expense of hiring outside freelancers or diverting current staff.

For our assignment, we looked primarily at sites that use crowdsourcing to transcribe handwritten documents (The Collected Works of Jeremy Bentham and Papers of the War Department), correct machine-generated transcriptions (Trove), and annotate digital imagery (NYPL Building Inspector).  The findings across all these sites were remarkably consistent: most of the work is done by a small coterie of “power users,” and users tend to be highly educated, retired, and driven by a sense of serving the “common good.”  While many institutions initiate a crowdsourcing project because they lack the budget or manpower to do the work on their own, I thought it was telling that the one site that evaluated the cost benefits of crowdsourcing (the Jeremy Bentham papers) found that the money spent on hiring two project managers to oversee the volunteer transcribers could have been better spent on having the two managers perform the transcription work themselves.  Of course, this doesn’t factor in the cost of having outside editors review the managers’ work, which would also be necessary.

Indeed, crowdsourcing sites must be carefully designed to be easy to use, with few barriers to participation; otherwise, few will complete the work.  Further, full-time curators are needed to assist the volunteers, which—as the Bentham experience shows—is not inexpensive.  Site management and design can be quite costly to implement, and there is not yet much information on the long-term benefits of this approach.  Will people remain engaged with a site long enough to transcribe what are often massive amounts of papers or annotate a great number of digital scans?

My own main motivation for participating in these activities would be an interest in the content itself.  The task of transcribing is fairly tedious, and the handwritten documents are difficult to read.  Then again, those who are fascinated by the subject matter are often willing to perform what can be time-consuming work.  I am personally skeptical of the thinking behind NYPL’s Building Inspector project: that individuals will use their spare time waiting on line to correct the tracing of building footprints on old fire insurance maps.  This is not the kind of engaging “gamification” that one finds in Candy Crush or similar addictive apps and websites.  It will be interesting to see over time whether enough material is reviewed to achieve the project’s goals.

Reading Wikipedia

Although most people use Wikipedia like a traditional encyclopedia, to answer factual questions, few are familiar with how each entry is created and what this may mean in terms of its accuracy, bias, and reliability.  Many have heard the term “crowdsourcing” but may not understand that it can have different meanings depending on the formal and informal rules and regulations used in its implementation.

Although Wikipedia was founded on the idea of “crowdsourcing”–that each entry would be written and revised by its users–there is a good deal of policing of the site by a group of editors and guardians who enforce organizational rules that have evolved over time.  There is also a concerted effort to weed out spammers and those promoting a specific bias or point of view, particularly those who may be promoting their own work.  This has led to controversy, as some newer users accuse the “old guard” of limiting their contributions.  Editors can even block users from the site if they feel the rules are not being followed.

Nonetheless, Wikipedia does offer a good deal of transparency in the editorial process, mostly through the ability to examine the “History” of each entry.  Taking the Digital Humanities entry as an example (https://en.wikipedia.org/wiki/Digital_humanities), the user can track the history of the entry back to its origins in 2006, when it was begun by Elijah Meeks, a DH librarian at Stanford University.  Each change made over time can be examined individually, with the ability to compare the changed text with the previous version.  This is most illuminating in this entry, as it shows how–not surprisingly in a new field–the definition of what constitutes DH has expanded over time, leading to many new types of projects and approaches being embraced by the field.

Another key feature of Wikipedia is that you can find out the background of many of the contributors by clicking on their names in the History tab.  Some choose not to create a biography or never “log in” as users, but many at least offer a generic biography that points to their background.  Not surprisingly, in a field dominated by academic discourse, most of the major contributors to this entry are academics who themselves work in the DH field.

Another key feature is the requirement that all factual information be sourced.  The DH entry offers 94 footnotes and an extensive bibliography.  This encourages the reader to go beyond this entry to engage more fully in the discussions and debates in the field.

This type of analysis is most appropriate for those seeking to expand their study of a topic beyond the basic “just the facts” approach offered by Wikipedia–and indeed by any encyclopedia.  Encyclopedias are best for answering basic factual questions–although even simple facts like birth dates can be contested–but they are not the be-all and end-all of research.

Three Types of Visualization Software: Voyant, kepler.gl, and Palladio

Three open-access tools for the Digital Humanities offer different ways to analyze large collections of information.  For this class, we used the WPA Slave Narratives, as digitized by the Library of Congress, as source material to explore the functionality of each piece of software and its usefulness as an analytical tool.

Voyant is designed for text mining: it creates visual representations (graphs and word clouds) of the common terms found across a large body of source material.  It is most useful for analyzing the common language and topics that occur across a collection.  It is relatively easy to use, and easy to “toggle” between views to understand the prevalence of common terms across the full dataset and within identified subsets of it.  As with all three of these software programs, the quality of the visual representations relies on the extent of the source material and an understanding of how it was created.  I think this is an excellent tool for evaluating literary works and other “fixed” sources where the author’s intent is clearest.
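
Voyant itself is a point-and-click web tool, but for those curious about what it is doing under the hood, here is a minimal sketch, in plain Python, of the kind of term-frequency counting that underlies its word clouds and graphs.  The file name and stopword list are my own illustrative choices, not part of Voyant.

```python
# A rough sketch of the term-frequency counting behind a word cloud.
# The file path and stopword list are illustrative assumptions.
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "of", "to", "in", "was", "i", "he", "she", "it"}

def top_terms(path, n=20):
    """Return the n most frequent words in a plain-text file, minus stopwords."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

# e.g., one interview transcript saved as plain text (hypothetical file name)
for term, count in top_terms("narratives/alabama_01.txt"):
    print(f"{term}\t{count}")
```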

kepler.gl is mapping software.  I found it the most cumbersome to use as a non-techie person, although being able to generate geographic visualizations of the sources in large datasets like the Slave Narratives is very useful.  Because my interest is less in mapping patterns across a geographical area and more in the relationships within the material itself, I found it produced the least useful visualizations, at least with the dataset we were using.
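
kepler.gl accepts CSV uploads and auto-detects latitude/longitude column pairs, so most of the preparation work is in building a clean table.  Here is a minimal sketch, with invented rows standing in for the actual dataset:

```python
# A minimal sketch of preparing point data for kepler.gl, which detects
# latitude/longitude columns in an uploaded CSV. Rows are invented examples.
import csv

rows = [
    {"informant": "Example Person A", "latitude": 32.3668, "longitude": -86.3000},
    {"informant": "Example Person B", "latitude": 30.4515, "longitude": -91.1871},
]

with open("interview_points.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["informant", "latitude", "longitude"])
    writer.writeheader()
    writer.writerows(rows)
```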

Palladio is a visualization tool that focuses on relationships within datasets, such as person and location or type of worker and topics discussed.  I found it easy to use and most illuminating when focusing on interpersonal and subject relationships; for geographic relationships, kepler.gl would be more useful.  Again, because my interest tended to be more topic-oriented than location-oriented, I found the visualizations easy to read and understand, and useful for my understanding of the source material.

Palladio

Palladio is an easy-to-use online tool that allows you to map relationships between two focus areas in a dataset.  It is most useful for illuminating relationships between subjects (such as influence networks: teacher-student, mentor-mentee, etc.).  It allows a good deal of flexibility in the creation and manipulation of graphs, so the user can quickly see how these relationships played out within the dataset being studied.
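
Palladio works from simple tabular data (pasted or uploaded), and a relationship graph needs little more than a two-column table of pairs.  Here is a minimal sketch; the names and column labels are invented examples of a teacher-student influence network:

```python
# A minimal sketch of the two-column relational table Palladio can graph.
# The pairs below are invented examples of an influence network.
import csv

edges = [
    ("Teacher A", "Student B"),
    ("Teacher A", "Student C"),
    ("Student B", "Student D"),
]

with open("influence_network.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["teacher", "student"])
    writer.writerows(edges)
```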

I found it most illuminating for the types of interpersonal relationships that humanities scholars often wish to study: how one writer might have influenced another, musical influence networks, etc.  While geographic locations can be mapped, this is mostly useful when the number of physical locations is small; rendering this information in mapping software would be more revealing.

For my own use, being able to understand how different creators interacted with each other is very valuable: possible unknown relationships can be revealed and then verified through further study of the source material.  To be truly meaningful, however, the source material should be as rich as possible; otherwise, some key relationships might be missed.  Understanding the limitations of your dataset is always important for making the best use of a tool like Palladio.

kepler.gl Map

I used kepler.gl with the Library of Congress Slave Narratives database, focusing on each informant’s place of birth and where they were interviewed for the project.  Working from the assumption that the place of interview–while important–was probably not where the person lived when he or she was enslaved, I found it useful to include the birthplace information.  Of course, not all informants necessarily remained at their birthplace throughout their period of slavery, but at least this gave a broader picture of the informants’ experiences and how broadly across the South they were originally spread.
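
To draw birthplace-to-interview connections, kepler.gl’s arc layer can be pointed at a table that carries two coordinate pairs per row, one for the origin and one for the destination.  A minimal sketch, with invented coordinates rather than actual values from the dataset:

```python
# A rough sketch of a table for kepler.gl's arc layer, which draws a line
# from a source coordinate pair to a target pair (birthplace -> interview
# location). The row below is an invented example, not actual data.
import csv

header = ["informant", "birth_lat", "birth_lng", "interview_lat", "interview_lng"]
rows = [
    ["Example Person", 32.8407, -83.6324, 33.7490, -84.3880],
]

with open("birth_to_interview.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```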

I found the kepler.gl software not particularly easy to use and somewhat clunky, particularly when it came to exporting maps and needing to reload the database each time I returned to the software.  I suppose this might be due to its online, open nature; perhaps there is a downloadable version that enables you to save personal maps and data more efficiently.

For a new user, I would plan on having enough time to complete a full map and save it in some form in one sitting, to avoid having to reproduce work already performed.  I also imagine that if cost were not an issue, there might be easier-to-use mapping software that would allow more flexibility for the user.