
“The World of Thucydides” at CAA 2011

I’m at Heathrow airport waiting to board a flight to Beijing (via Amsterdam), where I’ll be attending the CAA 2011 conference. To get into the conference mood I thought it might be a good idea to post the abstract of the paper that my colleague Agnes Thomas (CoDArchLab, University of Cologne) and I are going to give within a session entitled Digging with words: e-text and e-archaeology. [This version is slightly longer than the one that we submitted and that was accepted.]

The World of Thucydides: from Texts to Artifacts and back

The work presented in this paper is related to the Hellespont project, an NEH-DFG funded project aimed at joining together the digital collections of Perseus and Arachne [1]. In this paper we present ongoing work aimed at devising a Virtual Research Environment (VRE) that allows scholars to access both archaeological and textual information [2].

An environment integrating these two heterogeneous kinds of information will be highly valuable for both archaeologists and philologists: the former will have easier access to literary sources from the historical period an artifact belongs to, whereas the latter will have at hand iconographic or archaeological evidence related to a given text. We therefore explore the idea of a VRE combining archaeological and philological data with a further kind of textual information, namely secondary sources, and in particular journal articles. To develop new modes of opening up and combining those different kinds of sources, the project will focus on the so-called Pentecontaetia of the Greek historian Thucydides (Th. 1,89-1,118).

As of now, we do not yet have an automatic tool capable of capturing the passages of Thucydides’ Pentecontaetia that are of importance to our knowledge of Athens and Greece during the Classical period. For the identification of such “links” we rely entirely on the irreplaceable, accurate manual work of scholars. For this reason some preliminary work has been done by A. Thomas to manually identify, within the whole text of Thucydides’ Pentecontaetia, entities representing categories in the archaeological and philological evidence (e.g. built spaces, topography, individual persons, populations). What can instead be done, to some extent, by means of an automatic tool is extracting and parsing both canonical and modern bibliographic references, which express the citation network between ancient texts (i.e. primary sources) and modern publications about them (i.e. secondary sources).

As a corpus of secondary sources we use the journal articles available in JSTOR and recently made accessible to researchers via the Data for Research API [3]. Apart from JSTOR’s classification of such articles into the separate categories of archaeology and philology, those articles are likely to contain references to common named entities that make them overlap to some extent. As an example of what we are aiming at: in Th. 1,89 the author refers to the rebuilding of the Athenian city walls – after the Persian War at the beginning of the 5th century BC – as a result of the politics of the Athenian Themistocles. Within our VRE, the corresponding archaeological and philological metadata [4,5] will be presented to the user along with JSTOR articles from both archaeological and philological journals related to the contents of this text passage.

From a technical point of view, we are applying Named Entity Recognition techniques to JSTOR data accessed via the DfR API. References to primary sources, usually called “canonical references”, and bibliographic references to other modern publications will be extracted and parsed from JSTOR articles and used to reconstruct the above-mentioned citation networks [6,7]. On the semantic side, the CIDOC-CRM provides us with a suitable conceptual model to express the semantics of complex annotations about texts, archaeological findings, physical entities and abstract concepts that scholars might want to create using such a VRE.
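To give a concrete (if deliberately simplified) idea of what extracting canonical references involves, here is a toy Python sketch; the abbreviation list and pattern are illustrative assumptions of mine, whereas the actual extraction relies on the CRF-based approach described in [6]:

```python
import re

# Toy list of author abbreviations; a real extractor (see [6]) is a trained
# CRF model, not a fixed pattern like this one.
ABBREVIATIONS = ["Th.", "Thuc.", "Hdt.", "Paus."]

# Match an abbreviation followed by a book/chapter scope, e.g. "Thuc. 1.89"
# or "Th. 1,89-1,118" (both dot and comma separators occur in practice).
CANONICAL = re.compile(
    r"(?P<author>" + "|".join(re.escape(a) for a in ABBREVIATIONS) + r")"
    r"\s+(?P<scope>\d+[.,]\d+(?:\s*-\s*\d+[.,]\d+)?)"
)

def extract_canonical_references(text):
    """Return (author, scope) pairs for each canonical reference found."""
    return [m.group("author", "scope") for m in CANONICAL.finditer(text)]

print(extract_canonical_references(
    "The walls are discussed at Thuc. 1.89 and again at Th. 1,93."
))
# -> [('Thuc.', '1.89'), ('Th.', '1,93')]
```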

References

[1] The Hellespont Project, http://www.dainst.org/index_04b6084e91a114c63430001c3253dc21_en.html.

[2] Judith Wusteman, “Virtual Research Environments: What Is the Librarian’s Role?,” Journal of Librarianship and Information Science 40, no. 2 (2008): 67-70.

[3] John Burns et al., “JSTOR – Data for Research,” in Research and Advanced Technology for Digital Libraries, ed. Maristella Agosti et al., vol. 5714, Lecture Notes in Computer Science (Springer Berlin / Heidelberg, 2009), 416-419, http://dx.doi.org/10.1007/978-3-642-04346-8_48.

[4] Themistokleische Mauer, http://arachne.uni-koeln.de/item/topographie/8002430

[5] Thucydides 1.89, Perseus Digital Library, http://www.perseus.tufts.edu/hopper/text?doc=Thuc.+1.89&fromdoc=Perseus:text:1999.01.01999

[6] Matteo Romanello, Federico Boschetti, and Gregory Crane, “Citations in the Digital Library of Classics: Extracting Canonical References by Using Conditional Random Fields,” in Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (Suntec City, Singapore: Association for Computational Linguistics, 2009), 80–87, http://portal.acm.org/ft_gateway.cfm?id=1699763&type=pdf.

[7] Isaac G. Councill, C. Lee Giles, and Min-Yen Kan, “ParsCit: An Open-source CRF Reference String Parsing Package,” in Proceedings of the Sixth International Language Resources and Evaluation (LREC’08) (Marrakech, Morocco: European Language Resources Association (ELRA), 2008), http://www.comp.nus.edu.sg/~kanmy/papers/lrec08b.pdf.

Feet on the ground, DB on the cloud

This quick post is just to say how much the UK NGS saved my day today, and probably a lot more than that.

For my research project I’m digging into the JSTOR archive via the Data for Research API, and I soon realised to what extent scalability matters when trying to process all the data JSTOR holds about scholarly papers in Classics: there are ~60k of them.
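To give an idea of the retrieval step, here is a minimal sketch of a paginated fetch loop. The endpoint, parameters and response shape below are placeholders of my own invention, not the actual DfR API:

```python
# Sketch of paginated retrieval from a REST-style API. The endpoint,
# parameters and JSON shape are invented placeholders, NOT the real
# Data for Research API.
import requests

BASE_URL = "http://dfr.example.org/api/documents"  # placeholder endpoint

def fetch_all(discipline="classics", page_size=100):
    """Yield document records one page at a time until the API runs dry."""
    page = 1
    while True:
        resp = requests.get(BASE_URL, params={
            "discipline": discipline,
            "page": page,
            "limit": page_size,
        })
        resp.raise_for_status()
        records = resp.json().get("records", [])
        if not records:
            break
        for record in records:
            yield record
        page += 1
```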

The workflow I decided to go for basically consists of retrieving the data from JSTOR, making it persistent via Django (with a MySQL database backend), and then processing the data iteratively. The automatic annotations of those data (mainly Named Entity Recognition) that I’ll be producing are to be stored in the same Django DB.
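For the curious, here is a minimal sketch of what the Django side of this can look like; model and field names are illustrative assumptions of mine, not necessarily my actual schema:

```python
# models.py -- illustrative sketch of the persistence layer described above.
from django.db import models

class Document(models.Model):
    """A JSTOR article retrieved via the Data for Research API."""
    doi = models.CharField(max_length=255, unique=True)
    title = models.TextField()
    journal = models.CharField(max_length=255)
    # Keep the raw payload so processing can be re-run without re-downloading.
    raw_data = models.TextField()

class Annotation(models.Model):
    """An automatic annotation (e.g. a recognised named entity) on a document."""
    document = models.ForeignKey(Document, on_delete=models.CASCADE,
                                 related_name="annotations")
    entity_type = models.CharField(max_length=50)    # e.g. "PERSON", "PLACE"
    surface_form = models.CharField(max_length=255)  # the text as it appears
    start_offset = models.IntegerField()
    end_offset = models.IntegerField()
```

Keeping the annotations in a separate table means the expensive NER step can be re-run or refined without touching the documents themselves.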

After running the first batch to load my data into my Django application, the situation was as follows: 7k documents processed and a DB size of ~600MB. By the end of the data loading process the DB will thus grow to approximately 6GB – 600MB for 7k documents extrapolates to roughly 5-6GB for the full ~60k – and that’s just the data, without any annotations. And it’s at this stage that the cloud (or the grid) comes in handy.

I run my process locally, but the remote DB is somewhere on the NGS grid (in my case, on the Manchester node). This is a great relief to me and to my machine, of course, in terms of disk space, speed of access to the DB, and system load. Whenever I need to, I can dump the DB and install it locally, in case I find myself needing to access it without an internet connection. Not to mention that the batch process to load the data could be run from the grid. Finally, to give public access to the data I’m using the same Django application, which pulls the data out of the remote MySQL DB.
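In Django terms the remote setup amounts to little more than pointing the DATABASES setting at the grid node; hostname and credentials below are placeholders, of course:

```python
# settings.py -- point Django at the remote MySQL DB on the grid node.
# Hostname and credentials are placeholders, not the actual NGS details.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "jstor_classics",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "mysql.example-node.ngs.ac.uk",  # remote grid node
        "PORT": "3306",
    }
}
```

Dumping the remote DB for local use is then the usual round trip: mysqldump -h <host> -u myuser -p jstor_classics > dump.sql on one side, and loading the dump into a local MySQL instance on the other.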

Having free access to the national grid as a UK researcher is absolutely essential, also for someone – like me – who does not work in one of those fields known to benefit most from grid infrastructure. Even if digital, I’m nevertheless still a humanist.