I had spent four days at DH2015 without really choosing the sessions as the historian or philologist in me would have wanted. No one in my organization had prompted me to attend any particular session, but when I go to conferences, I tend to pick the sessions that could bring some new information back to my home organization. This time, I deliberately chose the last day’s sessions according to my own interests, so in the end I got to cherry-pick too.
The last day of DH2015 was kicked off with an interesting presentation by Joris Van Zundert from the Huygens Institute (When Is Coding Scholarship And When Is It Not?). I sometimes find it hard to communicate with my project’s programmer, and Van Zundert’s views gave me a couple of hints on how to approach such conversations in the future.
Van Zundert’s key point, in my interpretation, was that since a lot of scholarly decisions are made in code, it is vital to understand what is inside it. In the DH field, it is apparently hard to combine textual criticism with critical code studies. According to Van Zundert, there is a distinction between scholarly code and code for scholars, and although there is a trend toward merging the two fields, it takes two to tango: researchers in DH should become better acquainted with code, and the people developing the code should stop defining themselves as mere servants of scholars, as they have been educated to do.
Rhian James and Paul McCann from the National Library of Wales showcased their digital collections, the projects linked to them, and the new platforms they are utilizing in their paper, Beyond the Library Walls: The National Library of Wales Research Programme in Digital Collections. As in Finland, their newspaper collection is open to the public, which, as I learned from the discussion, is not the usual case in the UK. It was also great to discover what sorts of projects the National Library of Wales had executed recently. A couple of the ideas could easily be borrowed in Finland too. The Cymru1914 and Wales at War projects seemed to me quite a nice way to engage people. Well done!
After the “coffee” break (was it a tad stronger today than yesterday?), I attended the most philosophical paper session of DH2015. Some serious aspects of the humanities were discussed under the baton of Diane Jakacki.
The first speaker, Sayan Bhattacharyya from the University of Illinois (Approaching Textuality with the Metaphor of the Digitized Workset), exceeded all my expectations. The DH field is often (too often) driven by data, technology, and code, so Bhattacharyya’s talk was a welcome counterpoint to the discussion. Van Zundert spoke about combining code with scholarship, whereas Bhattacharyya approached the question from the other corner of the field: “[w]hile much of digital humanities considers the affordances provided by computational tools, a less noted and more serendipitous consequence of its emergence is the availability of new metaphors for rethinking perennial philosophical questions.” He discussed the workset as a metaphor. I believe that if Van Zundert and Bhattacharyya sat down for a pint together, they could make something beautiful out of coding. Bhattacharyya’s talk was a defence of code criticism from philosophy’s point of view. And I liked the reference to Roman Jakobson and the Linguistic Circle of Prague. Thanks!
Frank Fischer (When Does (German) Literature Take Place? – On the Analysis of Temporal Expressions in Large Corpora) from Göttingen presented a project that analyzed the temporal presence in literature. This means they had dug into the available data, tagged the temporal expressions, and created an analysis tool. I was thrilled to see the graphs, which suggested that German literature takes place more often in May than in any other month. According to Fischer’s data, the temporal locus of English literature falls a bit later in the year, whereas Tolstoy can be set in autumn. There is also a mobile application (Tiwoli) for Android and iOS covering world literature, based on Wikipedia datasets. I installed it and had a nice time with it. Just one request: could there be faceting options for national literatures?
After Fischer’s presentation, we spent a bit of time in the cultural milieu of Paul’s Cross Churchyard in London. Thomas Winn Dabbs’ paper (‘Nothing That Is Not There and The Nothing That Is’: Tracking the Digital Echoes between Churchyard and Theatre in Shakespeare’s London) approached the subject from the point of view of book history. Dabbs discussed how digital tools could be used in an integrated fashion to map the public reception of new print and drama during the Elizabethan period. In Dabbs’ opinion, digital solutions could generate new assumptions about les lieux de mémoire, in this case the cultural environment of St Paul’s Cathedral, which once was ‘there’ even if it is currently, or ostensibly, ‘not there’.
The icing on the cake was the paper by Dana Milstein on ambiguity! On ambiguity! No bullshit about difficulties with crowdsourcing, visualization methods, bad OCR quality, or transcription accuracy, but hard talk about ambiguity in texts! Gosh! Milstein et al. had created an algorithm that gives you a calculated value for the ambiguity of a text. The algorithm identifies the ambiguity factor, enables the production of a heat map identifying the potentially problematic areas of ambiguity, and, on top of it all, it is standardized. In addition, Milstein presented a couple of possible real-life use cases: the algorithm could be used not only in research but in journalistic and contract analysis too. Sounds practical to me, and that is what DH should be.
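The details of Milstein’s actual algorithm weren’t spelled out in the session, but the basic idea, scoring spans of text for ambiguity and flagging the hottest spots, is easy to sketch. Here is a toy illustration of my own (not the presented method): it scores each sentence by the share of words that have more than one sense in a small, hand-made polysemy lexicon, and the per-sentence scores serve as a crude “heat map”.

```python
# Toy ambiguity scorer -- an illustration of the idea, not Milstein's algorithm.
import re

# Hypothetical mini-lexicon: number of common senses per word.
SENSE_COUNTS = {
    "bank": 3, "plant": 3, "charge": 4, "file": 3,
    "contract": 2, "party": 3, "interest": 3,
}

def ambiguity_score(sentence: str) -> float:
    """Fraction of tokens with more than one listed sense."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    if not tokens:
        return 0.0
    ambiguous = sum(1 for t in tokens if SENSE_COUNTS.get(t, 1) > 1)
    return ambiguous / len(tokens)

def heat_map(text: str) -> list[tuple[str, float]]:
    """Per-sentence scores: high values flag potentially problematic spans."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    return [(s, round(ambiguity_score(s), 2)) for s in sentences]

if __name__ == "__main__":
    sample = "The party may charge interest. We walked to the river."
    for sent, score in heat_map(sample):
        print(f"{score:.2f}  {sent}")
```

In the contract-analysis use case mentioned in the talk, a tool like this would let you skim straight to the clauses with the highest scores. A real system would of course need a proper sense inventory and context-aware disambiguation rather than a word list.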
I must confess: I didn’t attend Tim Sherratt’s keynote. I have understood that it was magnificent. I don’t know, because I was taking a walk in Manly instead. You may judge for yourself: the slides can be retrieved here: http://discontents.com.au/unremembering-the-forgotten
This was the last wrap-up from DH2015, and I won’t even bother to proofread it. You are surely educated enough to spot the mistakes, and, as in crowdsourcing, the main thing is to do things, not to avoid errors.