Enhancing and Extending the Digital Study of Intertextuality (pt. 2): Revealing Patterns of Intertextuality in Corpora of Secondary Literature

[This is the text of my presentation at the Digital Classics Association panel titled Making Meaning from Data at the 146th Annual Meeting of the Society for Classical Studies (formerly the American Philological Association) in New Orleans. Unfortunately I wasn’t able to make it to the conference, so I’ve recorded it.]

Digital Study of Intertextuality

In this paper, divided into two parts, we consider two approaches to the digital study of intertextuality.

The first one–which was just presented–consists of developing software that allows us to find new candidate parallel passages.

The focus of my talk is on the second approach, which consists of tracking parallel passages that have already been “discovered” and are cited in secondary literature, meaning commentaries, journal articles, analytical reviews and so on.

What I’m going to present today is essentially what I’ve been developing during my PhD in the Department of Digital Humanities at King’s College London.

The Classicists’ Toolkit

Indexing citations is in itself nothing new: it has been done for centuries in different forms, such as indexes of cited passages at the end of a volume–the so-called indices locorum–and the subject classifications of library catalogues.

The main problem is that creating an index of citations which is both accurate and very granular is extremely time-consuming (and by granular I mean precise down to the level of the cited passage).

Therefore, the tools that are more precise and granular usually cover a smaller set of resources, whereas the tools with high coverage–such as a full-text search over Google Books, for example–are less precise and less granular. The automatic indexing system that I’ve developed tries to combine high coverage with fine granularity. In its current implementation the system is not 100% accurate, but this is something that can be improved in the future.

Citation Extraction: Step 1, (Named Entity Recognition)

Before considering the results of the automatic indexing, let’s look very briefly at how the system works.

The first step is to capture from plain text the components of a citation, which are highlighted here in different colours.

Citation Extraction Step 2: (Relation Detection)

The second step is to connect these components to form citations. “11,4,11” and “11,16,46”, for example, both depend on the reference to Pliny’s Naturalis Historia. Each of these relations constitutes a canonical citation.
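
To make the two steps a bit more concrete, here is a minimal sketch of the kind of intermediate data the extractor could produce for the Pliny example (the field names and the abbreviated surface forms are purely illustrative, not my system’s actual output format):

# Step 1 recognises the individual components of the citation.
components = [
    {"surface": "Plin.", "label": "AUTHOR"},      # author abbreviation (illustrative)
    {"surface": "Nat.", "label": "WORK"},         # work abbreviation (illustrative)
    {"surface": "11, 4, 11", "label": "SCOPE"},
    {"surface": "11, 16, 46", "label": "SCOPE"},
]

# Step 2 links each scope to the author/work it depends on:
# every relation yields one canonical citation.
citations = [
    {"author": "Plin.", "work": "Nat.", "scope": "11, 4, 11"},
    {"author": "Plin.", "work": "Nat.", "scope": "11, 16, 46"},
]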

Citation Extraction Step 3: (Disambiguation)

Finally, each citation needs to be assigned a CTS URN. CTS URNs are unique identifiers used to refer to passages of canonical texts (Charlotte Roueché, if I remember correctly, once described them as Social Security Numbers for texts).
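
For instance, the two Pliny citations from the previous step could be disambiguated roughly as follows (the URNs are quoted from memory and meant only as an illustration):

# Hypothetical result of the disambiguation step.
urns = {
    "Plin. Nat. 11, 4, 11":  "urn:cts:latinLit:phi0978.phi001:11.4.11",
    "Plin. Nat. 11, 16, 46": "urn:cts:latinLit:phi0978.phi001:11.16.46",
}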

Mining Citations from APh and JSTOR

I’ve used this system to mine citations from two datasets: the reviews of the L’APh and the articles contained in JSTOR that are related to Classics. I don’t want to go into the debate concerning big data in the Humanities. But the data originating from these two resources was already too big for me, given the limits of a PhD, so I’ve selected two samples: for the APh I worked on a small fraction of the 2004 volume–some 360 abstracts for a total of 26k tokens and 380 citations–whereas for JSTOR I’ve focussed on one journal, Materiali e Discussioni per l’analisi dei testi classici, which alone contains some 660 articles published over 29 years for a total of 5.6 million tokens.

From Index to Network

The digital index that is created by mining citations from texts is not substantially different from indexes of citations as we already know them. Well, the scale is different, given that such indexes can be created automatically from thousands of texts.

But at the same time, representing this index as a network radically changes how we can access and interact with the information it contains.

The main difference, which is especially relevant for the study of intertextuality, is that cited authors, works and passages are not shown in isolation as in an index, but the relations that exist between them can be measured, searched for and visualised.

From Texts to Network

The citations that are extracted from texts are transformed into a network structure. In order to analyse patterns at different levels I created three networks characterised by different degrees of granularity.

The macro network has only two types of nodes: first, the documents that contain the citations–in this case the green nodes that represent abstracts of the L’APh; and second, the cited authors, the red nodes. A connection between two nodes represents a citation. In the example here, the two citations are represented as connections between the citing documents and the two cited authors, Pliny and Vergil.

The meso network is more granular: in addition to the cited authors, the cited works are also displayed. In this example, the Naturalis Historia and the Georgics are represented as two orange nodes.

Finally, at the micro level the network also contains the single cited passages, in addition to authors and works.
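
As a rough illustration of how the three networks can be built from the extracted citations, here is a minimal sketch using networkx; the document identifier and the exact way the nodes are wired together are a simplification for the example, not the actual code of my system:

import networkx as nx

# One extracted citation: an APh abstract (invented id) citing Pliny, Naturalis Historia 11.4.11.
doc, author, work, passage = "aph-2004-0123", "Pliny", "Naturalis Historia", "11.4.11"

macro = nx.Graph()   # documents and cited authors only
macro.add_edge(doc, author)

meso = nx.Graph()    # documents, cited authors and cited works
meso.add_edge(doc, author)
meso.add_edge(author, work)

micro = nx.Graph()   # documents, authors, works and single cited passages
micro.add_edge(doc, author)
micro.add_edge(author, work)
micro.add_edge(work, passage)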

APh Micro Level

[interactive visualization available at phd.mr56k.info/data/viz/micro]

The micro-level network, which is shown in this slide, is too granular to let certain patterns emerge, but it’s extremely useful in other cases, for example when searching for information.

This network tends to be very sparse, meaning that nodes are not highly connected with each other: few documents cite the very same text passage (and are therefore connected). At the same time, this sparseness makes the few connections that are present extremely valuable. In fact, such a sparse network is especially useful when searching for publications related to a specific text passage or publications that discuss a specific set of parallel passages.

These two documents, for example, are likely to be closely related to each other as they both cite the same two passages from the third book of the Georgics. And the same is true for these other two documents both containing a citation to line 9 of Aristophanes’ Acharnians.
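
In network terms this lookup is trivial: two documents are related whenever they share a neighbouring passage node. A self-contained sketch with invented identifiers:

import networkx as nx
from itertools import combinations

micro = nx.Graph()
micro.add_edge("doc-1", "Georgics 3 (passage A)")
micro.add_edge("doc-2", "Georgics 3 (passage A)")
micro.add_edge("doc-3", "Acharnians 9")
micro.add_edge("doc-4", "Acharnians 9")

# Documents citing the very same passage are candidates for being closely related.
for passage in ("Georgics 3 (passage A)", "Acharnians 9"):
    for a, b in combinations(sorted(micro.neighbors(passage)), 2):
        print(a, "and", b, "both cite", passage)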

APh: Macro Level

[interactive visualization available at phd.mr56k.info/data/viz/macro]

This other slide shows the macro-level network created out of the citations extracted from the L’APh sample. The size of the red nodes, which represent ancient authors, is proportional to the number of citing documents, whereas the thickness of the connections between nodes depends on the number of times the author is cited. The isolated, faded-out nodes are the documents from the L’APh without citations; they are displayed here just to give an idea of their relatively small number.

When looking at the same network of citations, but at the macro level, the overall picture looks very different. From this perspective it is already possible to see the centrality of Vergil, with 29 citing documents. Similarly, what emerges is a group of abstracts that discuss Aristophanes in relation to Euripides. This is not at all surprising, but it emerges clearly and nicely from this macro-level network.

APh: Meso Level

[interactive visualization available at phd.mr56k.info/data/viz/meso]

The meso-level network provides some more information concerning which authors are cited, but without getting as granular and sparse as the micro-level network.

The second aspect in which citation networks differ very much from the traditional indexes of cited passages is the quantitative analysis they allow for.

This diagram is an example of this kind of analysis and shows the number of citations to the 5 most cited authors plotted over time. Here I’ve chosen the 5 most cited authors, but one could choose a specific set of authors–for example Lucan, Vergil and Ovid–and analyse how the attention they received varied over time.
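
A sketch of how such a plot can be produced once the extracted citations have been loaded into a table with one row per citation (the file name and column names are invented for the example):

import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per extracted citation, with the publication year
# of the citing article and the name of the cited ancient author.
citations = pd.read_csv("citations.csv")

top5 = citations["cited_author"].value_counts().head(5).index
counts = (citations[citations["cited_author"].isin(top5)]
          .groupby(["year", "cited_author"]).size()
          .unstack(fill_value=0))

counts.plot()  # one line per author: number of citations per year
plt.ylabel("number of citations")
plt.show()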

This example has some clear limitations: first, some errors of the citation extraction system have resulted in an artificially high number of citations of the Appendix Vergiliana; second, this graph is based on the citations contained in just one journal, whereas the results would be much more interesting if the whole of JSTOR were considered.

Mining Citations from JSTOR and APh (part I)

This is the first of a series of posts related to my ongoing work on mining citations to primary sources (classical texts) from modern secondary sources in Classics (e.g. journal articles, commentaries, etc.). Originally I just wanted to rant about the frustrations of processing OCRed texts, but then I decided to turn my rant into something more interesting (hopefully) for the readers…

Let’s first explain how I got here. A substantial part of my PhD project is devoted to answering the question “how can we build an expert system able to extract citations to ancient texts from modern journal articles?”. The fun bit, however, comes a bit later, precisely when I start reflecting on the changes that may take place in the way classical texts are studied once scholars have at their disposal a tool like the one I’m trying to develop.

So far I have talked about classical texts and journal articles, but let’s be more precise. Since the time of a PhD is by definition limited (in some countries more than in others), I had to identify a couple of text corpora for processing.

I have decided that such corpora also had to be:

  1. meaningful to classical scholars
  2. large enough to represent a challenge in terms of processing and data analysis
  3. different from each other, so as to show that the approach I came up with is, to some extent, general or at least generalisable.

The Clean and the Dirty

The two corpora that I’ve been looking at are respectively the bibliographic reviews of the L’Année Philologique (APh) (the Clean) and the journal articles contained in the JSTOR archive (the Dirty, as in dirty OCR, that is). Among the things they have in common, what matters most to me is that they both contain loads of citations to primary sources, that is ancient texts, such as Homer’s Iliad or Thucydides’ Histories, to name two obvious ones.

But the two corpora differ in many ways. There are differences related to the type of text: the APh contains analytical, as opposed to critical, summaries of publications related to Classics and the study of the ancient world, whereas JSTOR contains the full text (OCR) of journals, a subset of which relates to Classics. This also implies a difference in document length: compared to a JSTOR article, an APh summary is typically about the same length as an article abstract. Moreover, the two corpora differ greatly in data quality: cleanly transcribed texts pulled out of a database on the one hand, and OCRed full text dumped into text files without any structural markup whatsoever on the other.

What this means in practice is that, for example, running heads and page numbers become part of the text to be processed, although it would be desirable to be able to simply filter them out. But this is not critical. Definitely more critical is what happens to the footnotes: there is no link between a footnote and the place in the text where it occurs. This matters for the work I’m doing because, for example, it is very common to make a claim, footnote it and then populate the footnote with citations of primary sources that back that claim up. This means that, even assuming my system manages to extract and identify a citation occurring within a footnote, that citation will still be de-contextualised unless we find a way to relate it to the footnoted text (but this goes beyond the scope of my work, at least for the time being).

But there’s an even more fundamental yet quite thorny problem to solve…

A Thorny Problem: Sentence Segmentation

My problem was that the first corpus I processed was the APh, the one with small, clean chunks of text. One problem that I did not have when working with that corpus was what is usually called sentence segmentation, sentence splitting or sentence boundary detection, that is, identifying the sentences of which a given text is made.

Do I really need to split my texts into sentences? Well, yes. The sentence is important because it is the fine-grained unit of context in which a given citation occurs, as opposed to the “global” context of the document containing that sentence or of the corpus containing the document. And the same text passage can be cited in so many different contexts–in relation to a specific use of the language (grammar or syntax), in relation to its content, and so on–that we really need to be able to capture that context.

Why did my system fail to split JSTOR articles into sentences? One step back: what did I use to perform this step? The software I was using was the Punkt Sentence Tokenizer contained in the Python-based NLTK framework. Now, one of the typical causes of trouble for this tool is the presence of abbreviations. And in my case, as I said already, there are loads of them. Basically, whenever a string like “Thuc. 1. 100. 3” occurs, the algorithm wrongly inserts a sentence boundary after “Thuc.”, “1.” and “100.”. As you can imagine, given that my main goal is precisely to capture those citations, this seriously undermines the accuracy of my system, as it happens at the very beginning of the data processing pipeline.
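
The behaviour is easy to reproduce with the tokenizer that ships with NLTK (the sample sentence is made up, but the splitting pattern is the one just described):

import nltk.data  # requires the "punkt" models: nltk.download("punkt")

tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")

text = "The siege is described by Thuc. 1. 100. 3 and discussed at length below."
for sentence in tokenizer.tokenize(text):
    print(repr(sentence))
# Spurious sentence breaks get inserted after "Thuc.", "1." and "100.",
# so the citation ends up scattered across several "sentences".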

What to do about this? What I have done so far–just to see how big the improvement in the accuracy of the citation extractor is once the sentence boundaries are fixed, and it is remarkable!–was to manually correct the output of the algorithm. But given the scale of the corpus this is not a feasible or scalable approach: there are approximately 70,000 papers, with hundreds of sentences each, waiting to be processed.

Another possible solution is to compare other available libraries (e.g. LingPipe) to see if any of them performs particularly well on texts that are full of citations. The last approach I can think of is training the Punkt Sentence Tokenizer on a bunch of manually corrected JSTOR documents and seeing how big the improvement is: of course, what I need in my case is a noticeable improvement with a relatively small training set.

Oh no, wait, there is another option: resort to the magic power of regular expressions to correct the most predictable mistakes of the sentence tokenizer and see if this does the trick.
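
A minimal sketch of what such a regex-based post-correction could look like; the pattern only covers the “abbreviation followed by numbers” case discussed above and is meant as an illustration, not as the fix I will eventually adopt:

import re

# A "sentence" whose last token is a short capitalised abbreviation or a bare
# number followed by a full stop is probably a citation split in the middle.
BROKEN_TAIL = re.compile(r"^(?:[A-Z][a-z]{1,6}\.|\d{1,3}\.)$")

def merge_broken_sentences(sentences):
    merged = []
    for sent in sentences:
        if merged and BROKEN_TAIL.match(merged[-1].split()[-1]) and sent[:1].isdigit():
            merged[-1] += " " + sent   # re-join the wrongly split pieces
        else:
            merged.append(sent)
    return merged

print(merge_broken_sentences(
    ["The siege is described by Thuc.", "1.", "100.", "3 and discussed below."]))
# -> ['The siege is described by Thuc. 1. 100. 3 and discussed below.']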

Skosifying an Archaeological Thesaurus

Background

I am particularly happy about this blog post as it relates to the work I have been doing, for slightly more than a year now, within the DARIAH-DE project, the German branch of the EU-funded DARIAH project. Within this project I am currently working on a set of recommendations for interdisciplinary interoperability.

Interoperability, generally speaking, can be used just as a buzzword or can mean something closer to fully fledged Artificial Intelligence–from which, I believe, we are still quite far. Being very much aware of this, and of the several existing definitions of interoperability, within our project we try to keep a pragmatic approach. This blog post describes a use case we have been working on recently, showing how greater interoperability can be achieved by following some best-practice recommendations concerning licenses, protocols and standards.

In a Nutshell

If you are wondering what this post is actually about–and probably whether you should read it or not–here is a short summary. I will describe the process of transforming a thesaurus encoded in Marc 21 into a SKOS thesaurus–that’s what is meant by skosification–in a way that does not involve (much) human interaction. The workflow relies upon an OAI-PMH interface, the Stellar Console and an AllegroGraph triple store where the SKOS/RDF thesaurus is stored.

Sounds interesting? Keep reading!

Legacy data

The data used in this use case come from Zenon, the OPAC of the German Archaeological Institute, and specifically from Zenon’s thesaurus. This thesaurus is stored as Marc 21 XML and is made available via an open OAI-PMH interface (here you can find the end-point).

Such a thesaurus is an essential tool for browsing the content of the library catalogue: each entry is assigned one or more subject terms drawn from the thesaurus. The image below shows the thesaurus visualized as a bibliographic tree: Zenon users, and probably many archaeologists, find this and similar information retrieval tools extremely important for their daily work.

OK. How can I get the raw data?

This is one of the typical interoperability bottlenecks. A classical scenario looks as follows:

  • Jane has some data
  • Bob wants to use Jane’s data
  • Bob [phones | writes an email to] Jane asking for
    • permission to use her data
    • the actual data
  • Jane sends Bob [the data | a link to download them]
  • data change and Bob’s version is now out-of-date

In the case of Zenon’s thesaurus things look quite different, as all the data are accessible via an OAI-PMH interface which allows one to download–by means of a few lines of code–the entire data collection, without any need for human-to-human interaction and in a way that can be repeated at any time, without bothering Jane by phone or email every time.

This latter aspect becomes even more important when the data tend to change over time, as is the case with Zenon’s thesaurus. This is the main difference between negotiated interchange and interoperation, as Syd Bauman puts it here, and it is also the reason why the OAI-PMH protocol is an essential piece of an interoperable architecture.

Downloading the thesaurus records as Marc 21 XML becomes as easy as running the following command from the console:

curl "http://opac.dainst.org/OAI?verb=ListRecords&metadataPrefix=marc21&set=DAI_THS"

Re-usable tools

However, this use case would never have been possible without the Stellar Console, a freely available and open-source piece of software developed by Ceri Binding and Doug Tudhope within the AHRC-funded project “Semantic Technologies Enhancing Links and Linked Data for Archaeological Resources” (STELLAR).

I came across this tool last year at the CAA 2012 conference in Southampton where Ceri gave a paper and performed a live demo of the software. The key idea underlying the Console is to accept the simplest–or at least a rather simple–input format such as CSV in order to produce a more structured and semantic output such as SKOS/RDF or CIDOC-CRM/RDF by applying a set of (customizable) templates.

The Hacking Bit

My main task consisted of writing a short script–approximately a hundred lines of Python–to a) harvest the OAI-PMH repository and fetch the ~80k records of the thesaurus and b) produce a CSV output to be fed into the Stellar Console. I have put all the code, including the input, intermediate output and final output, into the skosifaurus repository on GitHub.

Apart from harvesting the OAI-PMH endpoint and spitting out some CSV, the script also performs a mapping between Marc21 fields and SKOS classes and relationships–in the code repository you can also find my notes, in case you are interested in the gory details of this mapping.

In order to figure out the correct mapping between Marc and SKOS I repeatedly went to see the librarian at the DAI. Not only did this turn out to be extremely helpful, it was absolutely necessary, for at least two reasons: first, my poor knowledge of library standards; second, Marc21 lends itself to being used in slightly different ways. In this sense, Marc21 as a standard enables syntactic interoperability but only partly semantic interoperability: in other words, there is no guarantee that two thesauri both encoded in Marc21 will use precisely the same fields to encode the same kinds of information.
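
Just to give an idea of the shape of such a mapping (the actual field-by-field choices are documented in the notes in the repository), a deliberately simplified and purely illustrative version could look like this:

# Illustrative only: not the mapping actually used for the Zenon thesaurus.
MARC_TO_SKOS = {
    "150": "skos:prefLabel",  # authority heading -> preferred label
    "450": "skos:altLabel",   # "see from" tracing -> alternative label
    "550": "skos:broader",    # "see also from" tracing -> broader/related, depending on $w
}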

What didn’t Work?

Well, things mostly worked smoothly. However, there was a small problem related to text encoding which puzzled me for some time. To understand it, it is important to point out that the Python script was run on Mac OS, whereas the Stellar Console ran on Windows, as it currently works only on that platform. At this point one might say: “but what’s the problem if you use Unicode?”.

Funnily enough, the problem lay precisely in the way the CSV file was read by the Stellar Console. In the first version of the script, the lines that write the CSV file to disk looked like this:

file = codecs.open("thesaurus_temp.csv","w","utf-8")
file.write("\n".join(output))

This works in most cases. But if the file you are writing is to be processed in a Windows environment–for whatever reason you may want (or have) to do so–you should use the following code instead, just to be on the safe side:

file = codecs.open("thesaurus_temp.csv","w","utf-8-sig")
file.write("\n".join(output))

The reason, which is exquisitely technical, is that Microsoft uses a special sequence of bytes–a sort of Byte Order Mark (BOM)–prepended to a UTF-8 encoded file to let the software understand in which format the file is encoded. Without that byte sequence the file won’t be opened correctly by some software (e.g. MS Excel). You can read more about this in section 7.8 of the documentation for the Python codecs library.
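
Concretely, the difference between the two codecs is just those three bytes at the very beginning of the file (shown here with Python 3 string semantics):

import codecs

codecs.BOM_UTF8                   # -> b'\xef\xbb\xbf'
"some text".encode("utf-8")       # -> b'some text'
"some text".encode("utf-8-sig")   # -> b'\xef\xbb\xbfsome text'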

The Stellar Console is also affected by this issue: without this byte sequence a UTF-8 encoded file won’t be read correctly by the Console, resulting in the content of the output file being garbled.

 

The SKOSified Thesaurus

To sum up the whole process:

  1. I ran a Python script (source) which harvests ~80,000 Marc21 XML records from DAI’s OPAC via its OAI-PMH interface (end-point here);
  2. this script then produces an intermediate CSV output (file) according to a Marc2SKOS mapping that I’ve defined (further details here);
  3. the intermediate CSV file is fed into the Stellar Console, which spits out an RDF serialization of the SKOS thesaurus (RDF/XML version, RDF/Turtle version).

To get a taste of the final result, below is an image showing what the SKOS thesaurus looks like when visualized within Gruff (Gruff is a client for the AllegroGraph triple store):

But if you are interested in further techy details on this topic please stay tuned, as I will be blogging about it in a follow-up post!

Programme of the CAA2012 session “LOD for the Ancient World”

As I wrote in a previous post, I will be chairing, together with my colleague Dr Felix Schäfer, a session at CAA 2012 on Linked Open Data for the ancient world. The session (code Data1) will take place on the morning of March 27th, and our programme looks really interesting, with a good balance of theoretical reflections and practical applications of LOD in archaeology.

These are the 5 papers, given in no particular order, to be presented (you can also bookmark the Delicious stack created by the organisers):

We hope to see you there! Otherwise watch this space for updates, as I will be posting materials before and after the session.

Linked Open Data for the Ancient World at CAA 2012

This year the Computer Applications and Quantitative Methods in Archaeology (CAA) conference will be held in Southampton (26-30 March 2012). I will be chairing, together with Dr Felix Schäfer (Deutsches Archäologisches Institut, Berlin) and Prof. Dr. Reinhard Förtsch (CoDArchLab, University of Cologne), a session on Linked Open Data for the Ancient World.

This session aims to explore the opportunities, challenges and methodological consequences related to the Linked Open Data approach for the study of the ancient world. We welcome multi-disciplinary submissions dealing with the following or related aspects of Linked Open Data: URIs for Cultural Heritage objects, methodological consequences of LOD, projects publishing data as LOD, relevant tools and live applications based on LOD, digital libraries and their content in relation to ancient world objects, and other approaches to making data interoperable and interlinked.

The deadline for submission has been extended to Dec 7 (11:59pm GMT). Here you can find more details about the conference and read the call for papers, and there you can submit your abstract.

Linked Open Data for the Ancient World (abstract)

[session code: Data1]

The study of the Ancient World is by nature fertile ground for the adoption and exploitation of the Linked Open Data (LOD) approach. Indeed, its long tradition, the diversity of materials and resources, as well as the high level of disciplinary specialisation, lead to a situation where silos of knowledge, even when available online and under open-access licenses, are isolated from each other. This situation is also reflected in the segmentation that the study of the Ancient World has reached, with the inevitable tendency to favour one single perspective over others. The LOD approach, on the contrary, allows us to integrate heterogeneous sources of information by means of links and persistent identifiers while preserving the disciplinary specificity of the data.

The recent adoption of the LOD principles by projects such as Pelagios [1], SPQR [2] and the British Museum [3], in line with the CIDOC-CRM’s Linked Open Data Recommendation for Museums [4], is an important step towards a future of interoperable data in archaeology and Classics. There is a variety of ways in which different resources are related to each other: an inscribed stone, for instance, will be linked to the edition of the text, to the building and location it belonged to, to different photographs of the object, to a record in the museum catalogue and to related literature. Having those different pieces of information interconnected would allow us to overcome, to some degree, the fragmented view of antiquity mentioned above by rendering a more holistic image of the past.

In this session we shall discuss the advantages and disadvantages of LOD for the study of the Ancient World, look at available data, existing tools and live applications (beyond the status of testbeds) and ask which steps should be taken to overcome existing obstacles and increase the amount of LOD. Furthermore, we welcome reflections on the opportunities, challenges and methodological consequences for the disciplines involved. In continuity with past sessions of the conference on related topics, this session addresses issues including but not limited to:

* URIs for Cultural Heritage objects

* methodological reflections on consequences of LOD

* experiences of projects publishing their data as LOD

* discussion of relevant tools and live applications based on LOD

* digital libraries and their content in relation to Ancient World objects

* other approaches to making data interoperable and interlinked

References

[1] http://pelagios-project.blogspot.com/

[2] http://spqr.cerch.kcl.ac.uk/

[3] http://collection.britishmuseum.org/About

[4] http://www.cidoc-crm.org/URIs_and_Linked_Open_Data.html

PKP 2011 Hackfest

Today was the kick-off of the hackfest at the PKP 2011 conference. Not many people turned up, but I had the chance to spend some quality (coding) time with PKP developers and to have a sort of personal code sprint on a side project, namely developing a plugin to integrate a Named Entity Recognition (NER) web service into an OJS installation (see here and there for more theoretical background).

At the end of the day what I got done was:

  • set up a local instance of OJS (version 2.3.6) using MAMP;
  • give the OJS Voyeur plugin a quick try, which unfortunately for me works only with versions <= 2.2.x;
  • create the bare bones of the plugin, whose code is up here (for my personal record rather than for others’ use, at least at this early stage);
  • write a PHP class to query a web service (that I’m developing) to extract citations of ancient works from (plain) texts;
  • come up with two possible scenarios for further implementation of the plugin, to happen possibly earlier than next year’s PKP hackfest 😉
The idea of this post, indeed, is to comment a little on these two possible scenarios.

1. Client-side centric

The first scenario is rather heavy on the client side. The plugin is packaged as an OJS plugin and what it does is essentially as follows:

  1. after an article is loaded for viewing, a JavaScript file (grab.js) gets all the <p> elements of the HTML article and sends them via Ajax to a PHP page (proxy.php);
  2. a PHP class acts as a proxy (or client) for a third-party NER web service;
  3. the data received via the Ajax call are passed on to the web service via XML-RPC;
  4. the response is returned by the web service in JSON or XML format…
  5. … and is then processed again by the JS script, ideally using a compiled template based on jQuery’s templating capabilities. Finally, the extracted citations are displayed as a summary box alongside the article.

2. Server-side centric

In the second scenario that I envisaged, instead, most of the processing happens on the server side.

  1. before being displayed, the article is processed to extract <p> elements;
  2. the main plugin class (plugin.php) takes care of sending the input to and receiving a response from the NER service;
  3. the response is then run through a template (template.tpl) using OJS’s templating functionality;
  4. the formatted summary box is injected into the HTML which is now ready to be displayed to the user.

All in all, I think I came up with (1) mainly because my PHP is rather rusty at the moment ;). Therefore, although I’m quite reluctant to admit it, I might decide to go for (2). However, a good argument for the former is the case where the user can decide, for each paper, whether to enable this feature or not.

Idea for an OJS plugin

I have been meaning for quite a while to find some time to code a plugin for the Open Journal Systems (OJS) platform. Unfortunately it hasn’t happened yet. However, the good news is that the chance somehow came to me, since this year’s PKP conference will be held in a few weeks in Berlin, which is where I recently moved and now live. Alongside the PKP conference there will be a PKP hackfest, where I hope to have the chance to push forward my idea for an OJS plugin and finally get some coding done.

The idea is quite simple, but my knowledge of OJS is not (yet) such as to give me a clear idea of how to implement it. The plugin should enable the detection and markup of certain bits and pieces (read: “named entities”) of articles in an OJS installation. Although my application of the plugin (originally) targets a specific type of named entity–citations of ancient texts, found mainly in Classics journals–it would be possible to generalise the idea for wider application. Indeed, the plugin could be thought of as applicable to any named entities contained in journal articles, provided that a web service for recognising them is available.

As an example, let’s suppose we have an existing installation of OJS where an article contains the following paragraph (which is actually taken from a real-world article that appeared in Greek, Roman, and Byzantine Studies):

Thus, in the paragraphê speeches ([Dem.] 37.58–60, 38.21–22), a binding settlement is sometimes described as a “boundary marker” (horos); in an inheritance dispute (40.39), the binding decision is a telos or peras.

The text in bold contains two references to Demosthenes’ works: (1) a reference to sections 58–60 of the speech Against Pantaenetus and (2) another to sections 21–22 of Against Nausimachus and Xenopeithes. The plugin would parse each paragraph and then produce a result somewhat similar to this, where the cited texts are displayed alongside the article text. All in all, the whole idea is not much different from OJS’s citation markup assistant, although it can be generalised to cover other kinds of named entities (people, organisations, etc.).
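
The result of parsing that paragraph, as returned by the web service, could look roughly like this (the layout and values are only meant to illustrate the idea, not the actual response format):

# Hypothetical extraction result for the paragraph quoted above.
extracted = [
    {"surface": "[Dem.] 37.58-60", "author": "Demosthenes",
     "work": "Against Pantaenetus", "scope": "58-60"},
    {"surface": "38.21-22", "author": "Demosthenes",
     "work": "Against Nausimachus and Xenopeithes", "scope": "21-22"},
]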

Some aspects that I believe are important for the implementation of such a plugin are:

  • client/server architecture: the plugin should act as a client with respect to the Named Entity Recognition web service; I already have a working prototype of a web service (based on XML-RPC) performing the extraction of citations as described above (see the sketch after this list);
  • the markup of the extracted named entities should be customisable, ideally based on a template rewrite system, and should allow one to output RDFa or microformatted markup.
  • being able to review, correct and then store the output of the automatic extraction would be a plus (possibly including interaction with authority lists to which the named entities can be linked).
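
Just to illustrate the client/server interaction independently of PHP, this is roughly what querying such an XML-RPC service looks like from Python; the endpoint URL and the method name are invented for the example:

import xmlrpc.client

# Hypothetical endpoint and method name of the citation-extraction web service.
service = xmlrpc.client.ServerProxy("http://localhost:8080/")

paragraph = ("Thus, in the paragraphe speeches ([Dem.] 37.58-60, 38.21-22), "
             "a binding settlement is sometimes described as a boundary marker (horos) ...")

for citation in service.extract_citations(paragraph):
    print(citation)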

So, this is the idea in a nutshell. I’m looking forward to discussing it with interested OJSers next week in Berlin, and I hope there will be a follow-up post with some updates on the hackfest’s outcome.

XML and NLP: like Oil and Water? [pt. 2]

This is the second part of a series (see pt. 1) about XML and NLP, and specifically about how it’s not really easy to go back and forth between the two. Let me sum up what I was trying to do with my XML (SGML, actually) file: I wanted to 1) process the content of some elements using a Named Entity Recognition tool and then 2) be able to reincorporate the extracted named entities as XML markup into the original file. It sounds trivial, doesn’t it?

But why is this actually not that straightforward, and worth writing about? Essentially because to do so we need to go from a hierarchical format (XML) to a kind of flat one (BIO or IOB). During this transition all the information about the XML tree is irremediably lost unless we do something to prevent it. And once it’s lost, there is no way to inject it back into the XML.

I am aware of the existence of SAX, of course. However, SAX is not that suitable for my purpose, since it only allows me to keep track of the position in the file being parsed in terms of line and column number (see this and that). [I have to admit that at this point I did not look into other existing XML parsers.] Instead, I wanted to access, for each node or text element, its start and end position in the original file. The solution I found is probably not the easiest one, but it is quite interesting (I mean, in terms of the additional knowledge I acquired while solving this technical problem). The solution was to use ANTLR (ANother Tool for Language Recognition).

An explanation of what ANTLR is and how it works is beyond the scope of this post. But to put it simply: ANTLR is a language for generating parsers for domain-specific languages (see also here). It is typically used to write parsers for programming languages and is based on a few core concepts: grammar, lexer, parser and Abstract Syntax Tree (AST). Therefore, it is possible to write an ad hoc XML/SGML parser using this language. To be honest, the learning curve is pretty steep, but one of the most rewarding things about ANTLR is that the same grammar can be compiled into different target languages (Python and Java in my case) with just a few (and not substantial) changes, with great benefits in terms of code reusability.

The parser I came up with (source code here) is based on other code developed by the ANTLR community. Essentially, I did some hacking on the original to allow the text elements to be tokenised on the fly while parsing the XML. During the parsing process, the text elements in the XML are split on space characters into tokens whose start and end positions are kept.

My ANTLR XML/SGML parser performs, on the fly, a few other normalisations in order to produce output that is ready to be consumed by a Named Entity Recogniser:

  1. resolving SGML entities into Unicode;
  2. transcoding BetaCode Greek into Unicode;
  3. tokenising text using the non-breaking space (&nbsp;) in addition to normal spaces: this task in particular, although it may seem trivial, implies recalculating the position of each new token in the input file and required a bit more thinking through.
The result of running the parser over an SGML file is a list of tokens. I decided to serialise the output as JSON, for the time being, and a snippet of the result looks pretty much like this:
[{"start": 2768, "end": 2778, "utext": "\u0153uvre", "otext": "&oelig;uvre"},
{"start": 2780, "end": 2782, "utext": "par", "otext": "par"},
{"start": 2784, "end": 2790, "utext": "Achille", "otext": "Achille"}]

Start and end indicate (not surprisingly) the byte position of the token within the file, whereas otext and utext contain respectively the original text and the text after the resolution of character entities.

To sum up, the main benefit of this approach is that, once named entities have been automatically identified within the text of an XML/SGML file (e.g. “Achille” in the example above), we can transform this newly acquired NE annotation into XML markup and pipe it back into the original file.
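
A minimal sketch of that final step, assuming the original file content is available as a single string and ignoring the bytes-versus-characters subtlety (the tag name is just an example):

def inject_markup(original, start, end, tag="persName"):
    """Wrap the span between the two offsets in an XML element.
    The offsets in the JSON above look inclusive, hence the end + 1 in the slices."""
    return (original[:start]
            + "<%s>%s</%s>" % (tag, original[start:end + 1], tag)
            + original[end + 1:])

# e.g. inject_markup(sgml_source, 2784, 2790) for the token "Achille" above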

“The World of Thucydides” at CAA 2011

I’m at Heathrow airport waiting to board a flight to Beijing (via Amsterdam), where I’ll be attending the CAA 2011 conference. To get into the conference mood I thought it might be a good idea to post the abstract of the paper that my colleague Agnes Thomas (CoDArchLab, University of Cologne) and I are going to give within a session entitled Digging with words: e-text and e-archaeology. [This version is slightly longer than the one that we submitted and that has been accepted.]

The World of Thucydides: from Texts to Artifacts and back

The work presented in this paper is related to the Hellespont project, an NEH-DFG funded project aimed at joining together the digital collections of Perseus and Arachne [1]. In this paper we present ongoing work aimed at devising a Virtual Research Environment (VRE) that allows scholars to access both archaeological and textual information [2].

An environment integrating these two heterogeneous kinds of information will be highly valuable for both archaeologists and philologists. Indeed, the former will have easier access to literary sources of the historical period an artifact belongs to, whereas the latter will have at hand iconographic or archaeological evidence related to a given text. Therefore, we explore the idea of a VRE combining archaeological and philological data with another kind of textual information, namely secondary sources and in particular journal articles. To develop new modes of opening up and combining those different kinds of sources, the project will focus on the so-called Pentecontaetia of the Greek historian Thucydides (Th. 1,89-1,118).

As of now, we do not (yet) have an automatic tool capable of capturing the passages of Thucydides’ Pentecontaetia that are of importance to our knowledge of Athens and Greece during the Classical period. For the identification of such “links” we rely entirely on the irreplaceable, manual and accurate work of scholars. For this reason some preliminary work has been done by A. Thomas to manually identify, within the whole text of Thucydides’ Pentecontaetia, entities representing categories in the archaeological and philological evidence (e.g. built spaces, topography, individual persons, populations). What can instead be done, to some extent, by means of an automatic tool is extracting and parsing both canonical and modern bibliographic references, which express the citation network between ancient texts (i.e. primary sources) and modern publications about them (i.e. secondary sources).

As a corpus of secondary sources we use the journal articles available in JSTOR and recently made accessible to researchers via the Data for Research API [3]. Regardless of JSTOR’s classification of such articles into the separate categories of archaeology and philology, those articles are likely to contain references to common named entities that make them overlap to some extent. As an example of what we are aiming at, in Th. 1,89 the author refers to the rebuilding of the Athenian city walls–after the Persian War at the beginning of the 5th century BC–as a result of the politics of the Athenian Themistocles. Within our VRE, the corresponding archaeological and philological metadata [4,5] will be presented to the user along with JSTOR articles from both archaeological and philological journals related to the contents of this text passage.

From a technical point of view, we are applying Named Entity Recognition techniques to JSTOR data accessed via the DfR API. References to primary sources, usually called “canonical references”, and bibliographic references to other modern publications will be extracted and parsed from JSTOR articles and used to reconstruct the above-mentioned citation networks [6,7]. On the semantic side, the CIDOC-CRM will provide us with a suitable conceptual model to express the semantics of the complex annotations about texts, archaeological finds, physical entities and abstract concepts that scholars might want to create using such a VRE.

References

[1] The Hellespont Project, <http://www.dainst.org/index_04b6084e91a114c63430001c3253dc21_en.html>.

[2] Judith Wusteman, “Virtual Research Environments: What Is the Librarian’s Role?,” Journal of Librarianship and Information Science 40, no. 2 (n.d.): 67-70.

[3] John Burns et al., “JSTOR – Data for Research,” in Research and Advanced Technology for Digital Libraries, ed. Maristella Agosti et al., vol. 5714, Lecture Notes in Computer Science (Springer Berlin / Heidelberg, 2009), 416-419, http://dx.doi.org/10.1007/978-3-642-04346-8_48.

[4] Themistokleische Mauer, http://arachne.uni-koeln.de/item/topographie/8002430

[5] http://www.perseus.tufts.edu/hopper/text?doc=Thuc.+1.89&fromdoc=Perseus:text:1999.01.01999

[6] Matteo Romanello, Federico Boschetti, and Gregory Crane, “Citations in the Digital Library of Classics: Extracting Canonical References by Using Conditional Random Fields,” in Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (Suntec City, Singapore: Association for Computational Linguistics, 2009), 80–87, http://portal.acm.org/ft_gateway.cfm?id=1699763&type=pdf.

[7] Isaac Councill, C. Lee Giles and Min-Yen Kan, “ParsCit: an Open-source CRF Reference String Parsing Package,” in Proceedings of the Sixth International Language Resources and Evaluation (LREC’08) (Marrakech, Morocco: European Language Resources Association (ELRA), 2008), http://www.comp.nus.edu.sg/~kanmy/papers/lrec08b.pdf.

Feet on the ground, DB on the cloud

This quick post is just to say how the UK NGS saved my day today, and probably a lot more than that.

For my research project I’m digging into the JSTOR archive via the Data for Research API. And I soon realised to what extent scalability matters when trying to process all the data contained in JSTOR related to scholarly papers in Classics. There are ~60k of them.

The workflow I decided to go for basically consists of retrieving the data from JSTOR, making them persistent via Django (+ MySQL database backend) and then processing the data iteratively. The automatic annotations of those data (mainly Named Entity Recognition) that I’ll be producing are to be stored in the same Django DB.

After running the first batch to load my data into my Django application, the situation was as follows: 7k documents processed and a DB size of ~600MB. By the end of my data-loading process the DB will grow to approximately 6GB (just the data, without any annotations). And it’s at this stage that the cloud (or the grid) comes in handy.

I run my process locally but the remote DB is somewhere on the NGS grid (in my case, on the Manchester node). This is of great relief to me and my machine, of course, in terms of disk space, speed of access to the DB and system load. Whenever I need to, I can dump the DB and install it locally, in case I find myself needing to access it without an internet connection. Not to mention the fact that the batch process that loads the data could be run from the grid. Finally, to give public access to the data I’m using the same Django application that pulls the data out of the remote MySQL DB.
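
For the record, switching between the local and the remote database is just a matter of changing Django’s connection settings, roughly as follows with a recent Django (host, names and credentials are placeholders):

# settings.py (placeholder values)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "jstor_dfr",
        "USER": "someuser",
        "PASSWORD": "********",
        "HOST": "mysql.example.ngs.ac.uk",  # the remote node on the NGS grid
        "PORT": "3306",
    }
}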

Having free access to the national grid as a UK researcher is absolutely essential, even for someone–like me–who does not work in one of those fields that are known to benefit most from grid infrastructure. Even if digital, I’m nevertheless still a humanist.