DR2-INTERVIEWS: Interview with Peter De Bolla

With this interview we open the “DR2-Interviews”, a new section of this blog dedicated to questions and answers about the use of quantitative methods.

A few months ago one of our members, Paolo Babbiotti, was in Cambridge as a Visiting Student and interviewed Peter De Bolla, Professor of Cultural History and Aesthetics and Director of the Cambridge Concept Lab: https://www.english.cam.ac.uk/people/Peter.De_Bolla/

Here you can read the interview in full. We thank Professor De Bolla for answering our questions.

  1. How did you come to use quantitative methods in the history of ideas?

I began to think about reading at scale as I researched the origins of the idea of human rights in the eighteenth century. That research was eventually published in my book The Architecture of Concepts: The Historical Formation of Human Rights (2013). For the most part I worked manually, compiling data from Eighteenth Century Collections Online (ECCO). At that stage I did not have access to the underlying data and so could not work with the dataset in an automated fashion, essentially writing code to pose analytic questions to the data (a toy sketch of what such a query can look like follows the reference list below). This changed when I formed the Cambridge Concept Lab and the University managed to obtain a licence to write code against the dataset. The Lab worked for four years and developed a computational method for enquiring into the history of concepts and their structuration. A number of papers have been published outlining the methodology and presenting case studies. They can be found here:

– ‘The Uses of Genre’, Ryan Healey, Ewan Jones, Paul Nulty, Gabriel Recchia, John Regan, Peter de Bolla, in Representations, 149 (Winter 2020).

– ‘The Idea of Liberty, 1600–1800: A Distributional Concept Analysis’, Peter de Bolla, Ewan Jones, Paul Nulty, Gabriel Recchia, John Regan, in Journal of the History of Ideas, 81.3 (July 2020).

– ‘Distributional Concept Analysis: A Computational Model for Mapping the History of Concepts’, Peter de Bolla, Ewan Jones, Paul Nulty, Gabriel Recchia, John Regan, in Contributions to the History of Concepts, 14.1 (Summer 2019).

– ‘The Conceptual Foundations of the Modern Idea of Government in the British Eighteenth Century: A Distributional Concept Analysis’, Peter de Bolla, Ewan Jones, Paul Nulty, Gabriel Recchia, John Regan, in International Journal for History, Culture, and Modernity, 7 (2019). https://www.history-culture-modernity.org/collections/special/digital-history/
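To make concrete what “writing code to pose analytic questions to the data” can mean in practice, here is a minimal sketch in Python. The toy corpus and the question asked of it (occurrences of “rights” per decade) are our illustrative assumptions standing in for licensed full-text data such as ECCO; this is not the Concept Lab’s actual tooling.

```python
# Toy sketch of "posing an analytic question to the data" in code:
# how often does the term "rights" occur per decade in a dated corpus?
# The corpus (a dict of year -> text) is a hypothetical stand-in for
# licensed data such as ECCO, which cannot be freely redistributed.
from collections import Counter
import re

corpus = {
    1690: "two treatises of government ... natural rights of men",
    1776: "we hold these truths ... unalienable rights",
    1791: "the rights of man ... rights and liberties",
}

def count_term(text, term):
    """Count whole-word, case-insensitive occurrences of a term."""
    return len(re.findall(rf"\b{re.escape(term)}\b", text.lower()))

# Aggregate counts by decade.
by_decade = Counter()
for year, text in corpus.items():
    by_decade[(year // 10) * 10] += count_term(text, "rights")

for decade in sorted(by_decade):
    print(f"{decade}s: {by_decade[decade]}")
```

On real data the same shape of query scales up unchanged: the analytic question is fixed first, and the code merely puts it to the whole dataset at once.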

  2. What is the additional contribution of quantitative methods with respect to traditional ones?

Quantitative methods allow one to read at scale. This should complement and not replace the standard ‘close reading’ methods that have long been established in the history of ideas. The great benefit of these new, digital approaches is that we can trace the development and transmission of ideas across a culture at large. Traditional methods assume that ideas are ‘held’ by persons, which of course they are, and that one person transmits ideas to other persons. This model essentially creates a ‘great men’ theory or map of ideas by which, say, Hume read Locke and in turn was read by Burke and so forth. This model captures some of the history in and of ideas, but it fails to register how ideas are more generally dispersed within a culture, passed across locations – books, pamphlets, newspapers – and persons who are usually understood to be minor or even irrelevant participants in the history of ideas. Now we can match or combine the two scales of reading, the human and the quantitative or computational, and discern a bigger and more accurate picture of the formation and transmission of ideas.

  3. In your view, what is and what should be the relation between digital methods and the humanities?

Digital methods must be created from the ground up, developed in tandem with a set of questions that are appropriate for the type of enquiry being undertaken and the datasets that are used. A one-size-fits-all approach is likely to be too insensitive to be of great use in humanities research. This means that researchers should either be able to work with digital or computational techniques themselves or construct collaborations which bring together different types of expertise. Many statistical and computational humanities projects proceed on the basis of ‘because we can’. Which is to say that, for example, a literary or philosophical corpus is interrogated by developing data on, say, frequencies of word use or patterns of lexical co-association. This data might then be exported into a software package for visualisation and, for example, a word cloud is produced. Why do projects carry out this kind of research and presentation of findings? Often it is ‘because we can’: because the software packages allow one to do this. In many of these cases the first and most important question has not been asked or framed in a useful way. One must first formulate what one wants to know, or to find out, before constructing a research model that might use techniques that are not designed by the project, or in other cases that are bespoke creations for the task at hand. If these protocols are followed, digital methods will break new ground in the humanities and complement the traditions of research that have built up over long stretches of time. The issue here is, once again, one of scale. Computers cannot read, but they can do many things that humans are not so good at, that is, discern patterns in very large datasets, which can help us understand humanistic inquiry in new and potentially transformative ways.
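As a minimal sketch of the kind of analysis mentioned above, here is what counting word frequencies and lexical co-associations can look like in Python. The toy corpus, the tokeniser, and the co-occurrence window size are illustrative assumptions of ours, not the Concept Lab’s pipeline, whose distributional concept analysis is considerably more sophisticated (see the papers listed above).

```python
# Sketch of the basic measurements named in the answer above: word
# frequencies and lexical co-association (co-occurrence within a window).
# Corpus, tokeniser, and WINDOW are arbitrary illustrative choices.
from collections import Counter
import re

corpus = [
    "the liberty of the press is essential to the liberty of the people",
    "government without the consent of the people is tyranny",
]

def tokenize(text):
    """Lowercase and split into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Raw word frequencies across the whole corpus.
freq = Counter(tok for doc in corpus for tok in tokenize(doc))
print(freq.most_common(5))

# Co-occurrence counts within a sliding window of 4 following tokens.
WINDOW = 4
cooc = Counter()
for doc in corpus:
    toks = tokenize(doc)
    for i, w in enumerate(toks):
        for v in toks[i + 1 : i + 1 + WINDOW]:
            cooc[tuple(sorted((w, v)))] += 1
print(cooc.most_common(5))
```

The point of the passage, of course, is that such counts only become research once they answer a question formulated in advance; producing the numbers is the easy part.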
