We need Digital Texts — Data that can be processed: About NLP for Historical Texts

Historical texts are increasingly available in digital form. Digitization is meant to preserve cultural heritage and make documents accessible, and such projects increasingly strive to create digital text: data that can be searched and processed. With the growing availability of digital historical texts, there is a growing interest in applying natural language processing (NLP) methods and tools to them. The specific linguistic properties of historical texts, above all the lack of standardized orthography, pose special challenges. Michael Piotrowski’s book gives an introduction to NLP and deals with these problems for historical texts.

The emerging field of practice known as the Digital Humanities seeks to use digital data for research in the humanities. It combines traditional qualitative methods with quantitative methods and tools such as information retrieval, text analytics, data mining, and visualization. The aim is to build digital infrastructures for the humanities.

However, computer processing of historical texts is not a completely new application, as the example of computational linguistics shows. But there is a difference: humanities computing has often been concerned with aspects of texts that focus on interpretation, whereas computational linguistics generally treats texts as data. Now, Piotrowski believes, the Digital Humanities mark a paradigm shift: quantitative methods are regarded as being on a par with qualitative methods.

Computational linguists, in turn, have discovered historical texts and their specific problems as objects of study. One of these problems is that existing tools are trained on modern languages: researchers address spelling, correction and similar issues, but tend to forget to take shifts in meaning into account, Piotrowski says. In any case, these developments indicate the beginning of a partial convergence of humanities computing and computational linguistics. To illustrate this, Piotrowski cites, for instance, Federico Boschetti (Improving OCR accuracy for classical critical editions), Iris Hendrickx (Automatic pragmatic text segmentation) and Eva Pettersson and Joakim Nivre (Automatic verb extraction from historical Swedish texts).

What these examples have in common is that NLP and the humanities contribute on more equal terms. NLP offers methods and tools for working with large amounts of text: digitization as well as methods for checking, correcting and processing texts. In other words: if the humanities seriously want to base their research on large quantities of text and apply quantitative methods, they will need NLP as the basis for all higher-level analyses, Piotrowski says.

On the other hand, NLP benefits from the humanities, too. Since most NLP work up to now has been done on newspaper and similar texts, what NLP currently lacks is a conceptual model of spelling variation, genre differences, and language change. So far there is no computational model that describes how synchronic and diachronic variants relate to each other and that might lead to some kind of prototype representing the relatedness of the variants. NLP tools must therefore be adapted to different domains and genres.
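To make the idea of variant relatedness a little more tangible, here is a minimal sketch (my own illustration, not a model from the book): it measures the edit distance between attested spellings of a word and picks the form closest to all the others as a crude “prototype”. A serious model would of course also have to capture systematic and diachronic patterns, which is precisely the gap Piotrowski points to.

```python
# Hypothetical sketch: relate spelling variants by Levenshtein distance
# and pick a "prototype" (the medoid, i.e. the form closest to all others).

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def prototype(variants: list[str]) -> str:
    """Return the variant with the smallest total distance to all others."""
    return min(variants, key=lambda v: sum(levenshtein(v, w) for w in variants))

# Toy example: historical spellings of German "und" ("and").
forms = ["und", "vnd", "unde", "vnnd", "vnde"]
print(prototype(forms))  # -> "vnd" (smallest total distance to the other forms)
```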

In line with this conception, Piotrowski discusses the digitization, encoding and annotation of historical texts and, in particular, the handling of spelling variation. He presents various projects and publications on these issues. As it stands, there is no standard, but a large variety of approaches. While computational linguistics has established sets of methods and techniques that provide a common understanding, as, for instance, in machine translation, this is not the case in NLP for historical texts: there is no universal approach to spelling normalization, for instance. His overview helps us to compare such approaches and techniques.
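To give an impression of what such approaches can look like in practice, here is a minimal, hypothetical sketch of one common family of techniques: apply hand-written rewrite rules to a historical form and accept the result only if it appears in a modern lexicon. The rules and the lexicon below are toy examples of my own, not resources described in the book.

```python
# Hypothetical sketch of rule-based spelling normalization with a
# lexicon filter. The rules are toy examples inspired by Early Modern German.
import re

RULES = [
    (re.compile(r"^v(?=[nm])"), "u"),  # initial "vn-"/"vm-" -> "un-"/"um-"
    (re.compile(r"ſ"), "s"),           # long s -> round s
    (re.compile(r"th"), "t"),          # "thun" -> "tun"
]

MODERN_LEXICON = {"und", "unter", "tun", "was"}

def normalize(token: str) -> str:
    """Apply the rewrite rules in order; keep the result only if it is a
    known modern word, otherwise return the token unchanged."""
    candidate = token
    for pattern, replacement in RULES:
        candidate = pattern.sub(replacement, candidate)
    return candidate if candidate in MODERN_LEXICON else token

print([normalize(t) for t in ["vnd", "thun", "waſ", "gantz"]])
# -> ['und', 'tun', 'was', 'gantz']  ("gantz" stays: no rule or lexicon match)
```

Other approaches replace the hand-written rules with edit-distance matching against a modern lexicon, or with rules learned from pairs of historical and modern spellings; Piotrowski’s overview is useful precisely because it lets us compare these options.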

He is certain that NLP will play a larger role in the humanities, especially in the processing of digital historical texts and cultural heritage. He sees three challenges here: (1) NLP practitioners must gain a better understanding of language variation and develop appropriate models for handling it; (2) tools that can process marked-up texts must be developed; and (3) NLP and the Digital Humanities must be brought together in order to realize the potential of NLP for historical languages.

So, Piotrowski is convinced that NLP is a basic element in the development of the Digital Humanities. To this end, he pleads for collaboration between computational linguistics and the humanities: scholars must understand algorithms and data structures. Indeed, we need digitization projects that create digital texts, that is, data that can be searched and processed. Michael Piotrowski’s book gives a good introduction to NLP and to the problems of processing historical texts.

Michael Piotrowski, Natural Language Processing for Historical Texts (Morgan & Claypool Publishers, 2012) / http://nlphist.hypotheses.org/author/mxpi

Guido Koller

Senior Historian, Swiss Federal Archives, CH-3003 Berne, Switzerland
