Friday, 13 June 2014

Doing History in the Digital World

At last, the results of the 2011 workshop have been published, with a general introduction written by me and six great articles from workshop participants. Many thanks to all!

Wednesday, 23 October 2013

Networks over space and time (abstracts)

Abstracts

Pau de Soto Cañamares (Instituto de Arqueología de Mérida - Spain): Costs and times of Roman transport: using network analysis to understand the Roman transportation system
Several methodological approaches are in use today that suggest how Roman transport worked. This project takes the operability of Roman infrastructures as an indispensable way to understand the strengths and shortcomings of the transportation system created in Roman times. A thorough analysis of each set of distribution models (both for times and for costs) provides valuable information for understanding the mechanisms of the Roman economy and society. It is therefore clear that combining all of these approaches (archaeological material, ancient sources, network simulation...) should allow us to obtain a more global perspective on the Roman economy, especially in matters of movement of goods.
The main geographical focus of this project is the north-east of Hispania, but with the aim of applying this methodology within a much broader geographic frame, the entire Iberian Peninsula, Italy and Britain were also analyzed.
As will be seen during the presentation of this work, knowledge of the infrastructures is essential to obtain a more accurate understanding of freight transportation. The project therefore takes into account the whole transport infrastructure that existed in Roman times, whether by road, river or sea. A set of constant values has been used to calculate the costs and transportation times needed for commerce. The model thus offers a simulation of the possible costs and times that had to be spent to transport certain goods from one particular spot of the territory to another (and even across the entire network).
Finally, the ability to see, graphically and in quantified form, cost and time values which until now could only be guessed at can open new perspectives on, and justifications for, the interpretations produced so far. In fact, the comparison between these results and existing archaeological and historical interpretations should not invalidate either; in many cases they should complement each other, clarifying and offering more elements for a global vision.
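The kind of computation described above can be illustrated with a minimal sketch using the NetworkX library: a small multimodal network in which each edge carries a distance and a transport mode, and per-mode constants convert distance into cost and travel time. The place names, distances and constants below are illustrative assumptions, not the project's actual data.

```python
# Minimal sketch of a multimodal transport-cost network (illustrative values only).
import networkx as nx

G = nx.Graph()
# Each segment: (from, to, distance in km, transport mode).
segments = [
    ("Tarraco", "Ilerda", 150, "road"),
    ("Ilerda", "Caesaraugusta", 120, "road"),
    ("Tarraco", "Dertosa", 80, "sea"),
    ("Dertosa", "Caesaraugusta", 200, "river"),
]
# Hypothetical per-km cost and per-day speed for each mode, playing the role
# of the "set of constant values" mentioned in the abstract.
COST_PER_KM = {"road": 1.0, "river": 0.25, "sea": 0.1}
SPEED_KM_PER_DAY = {"road": 30, "river": 50, "sea": 100}

for a, b, km, mode in segments:
    G.add_edge(a, b, cost=km * COST_PER_KM[mode], time=km / SPEED_KM_PER_DAY[mode])

# Cheapest and fastest routes between two points of the network.
cheapest = nx.shortest_path(G, "Tarraco", "Caesaraugusta", weight="cost")
fastest = nx.shortest_path(G, "Tarraco", "Caesaraugusta", weight="time")
print("cheapest route:", cheapest)
print("fastest route:", fastest)
```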


Thomas Thevenin (University of Burgundy - France), Robert S. Schwartz (Mount Holyoke College - USA) and Christophe Mimeur (University of Burgundy - France): Measuring the link between space and network over time
The growth of the railways seems to have been essential to the economic dynamism of northern France from 1830 to 1930. However, can we make the same statement for the south of the country and, more generally, for rural areas? On the one hand, in transportation economics the network effect is held to be essential to the development of the economy, agriculture and demography, and many government policies rely on this somewhat mythic belief to justify the construction of major infrastructures. On the other hand, many other authors, historians and geographers among them, have criticized this position on the “network effect” (Pumain 1982). These works are usually based on aggregate scales or are focused on urban areas. The database presented in this paper can be used to work at different scales, considering rural and urban regions and agrarian or industrial sectors over a long period of time. To this end, we need to explore the explanatory power of econometric solutions. We will present the first encouraging results based on GWR indicators (Fotheringham, Brunsdon, and Charlton 2009). This measure will be essential in moving from a descriptive approach to an explanatory scientific strategy.
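For readers unfamiliar with geographically weighted regression (GWR), the sketch below fits a GWR on synthetic point data with the mgwr Python package (one possible tool; the authors may well use different software). The coordinates, the "rail access" covariate and the "population growth" outcome are placeholders, not the project's database.

```python
# Sketch: geographically weighted regression on synthetic point data.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))        # locations of communes/stations
rail_access = rng.uniform(0, 1, size=(n, 1))     # placeholder covariate
pop_growth = 0.5 * rail_access + rng.normal(0, 0.1, size=(n, 1))  # placeholder outcome

bw = Sel_BW(coords, pop_growth, rail_access).search()   # choose a bandwidth
results = GWR(coords, pop_growth, rail_access, bw).fit()
# One local coefficient per location: the "network effect" is allowed to vary over space.
print(results.params[:5])
```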

Albertina Ferreira (Instituto Politécnico de Santarém - Portugal), Carlos Caldeira (Universidade de Évora - Portugal) and Fernanda Olival (Universidade de Évora - Portugal): From low-density networks to a geo-temporal approach
This study is based on 117,000 prosopographical records available in the SPARES database (Prosopographic System of Social Relations and Events Analysis). The database has been developed by the research project “Intermediate groups in the Portuguese dominions: the ‘familiares’ of the Holy Office (c. 1570-1773)”, at the University of Évora, and collects information on biographical and relational events from the sixteenth to the eighteenth century. All the data are georeferenced. When producing historical maps of the location of familiares and comissários of the Inquisition (1575-1775), the research team realized that there were large areas where these characters’ networks had a low-density distribution. This study aims at the creation of an analytic geo-temporal model which would allow historians to study these areas of low-density distribution in a comparative way. Departing from the dynamic network analysis approach, this methodology tries to adapt it to the elaboration of new research parameters. To this end, the team tried to couple the database with geographical information system software, namely ArcGIS. Even though these trials are built from the Inquisition's historical data, the model can be applied to other research themes involving time, space and networks.
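A minimal sketch of the geo-temporal slicing this implies is given below: georeferenced relational events are cut into time windows and one network is built per window, so that sparse and dense areas or periods can be compared. Column names and values are hypothetical, not the actual SPARES schema.

```python
# Sketch: slicing a georeferenced relational database into time windows
# and building one network per window (column names and data are invented).
import pandas as pd
import networkx as nx

events = pd.DataFrame({
    "person_a": ["Manuel", "Joana", "Manuel", "Rui"],
    "person_b": ["Rui", "Manuel", "Joana", "Joana"],
    "year":     [1610, 1625, 1650, 1655],
    "lat":      [38.57, 38.57, 38.02, 38.02],
    "lon":      [-7.91, -7.91, -7.87, -7.87],
})

for start in range(1600, 1700, 50):
    window = events[(events.year >= start) & (events.year < start + 50)]
    G = nx.from_pandas_edgelist(window, "person_a", "person_b")
    print(start, "-", start + 49, ":", G.number_of_nodes(), "nodes,",
          G.number_of_edges(), "edges, density", round(nx.density(G), 3))
```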

Martin Stark (University of Hamburg - Germany): Locating historical networks in time and space: current achievements and challenges
Given the nature of their sources, network analysis approaches in history often have a primary focus on social interactions. Well-known examples are letter exchanges between scholars, ties between traders or covert resistance activists, credit markets, career advances or migrations. Many of the above-mentioned network studies, however, also have strong spatial components, which directly affect the creation of social ties, their maintenance and their nature. Medieval trading, for example, depended largely on the capacity (or failure) to cover geographical distances, while the speed and intensity of scholarly exchanges depended on the reliability and speed of postal systems. Research on rural credit markets, for example, has revealed strong cross-border ties between nineteenth-century Germany, France and Luxembourg. At this stage, however, it seems that by and large the spatial dimension acts as a background against which historical social interactions are studied. I will present a selection of case studies and discuss how they integrate and explore the spatial dimension.
Historical sources not only allow us to reconstruct social interaction in detail but also offer clues to the temporal dimension in which it occurred: serial sources such as church registers, trading contracts and letters are often very easy to date and to relate to each other in time. Network analyses based on the hermeneutic analysis of texts and other objects typically need to deal with heterogeneous data: some ties can be dated precisely to an hour, whereas in other cases scholars need to infer time stamps from the context of other events, or simply cannot make any such statement at all. I will discuss the challenges posed by missing data and by data collection methods, as well as the challenges inherent in exploring temporal data using different visualisation techniques, some generic, some tailored to the needs of specific research questions.

Tim Evans (Imperial College London - UK): Spatial Network Models in Archaeology
I will look at the spatial network models that have been used in archaeology. I aim to show to what extent they all belong to larger families of models, which will highlight their similarities and differences. I will then ask whether one model is better than another and how we might answer that question. I will also look at the sorts of questions that can be answered with such models.
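As one illustration of this family of models, the sketch below implements a very simple gravity-type interaction model, in which the predicted tie strength between two sites grows with their sizes and decays exponentially with distance. The site coordinates, sizes and distance scale are invented for the example; they are not drawn from the talk.

```python
# Sketch of one member of the family of spatial network models discussed here:
# a simple gravity model (site names, sizes and the distance scale are invented).
import math

sites = {"A": (0, 0, 1.0), "B": (10, 0, 0.5), "C": (0, 15, 2.0)}  # x, y, size
D = 12.0  # characteristic travel distance

edges = []
for i, (xi, yi, si) in sites.items():
    for j, (xj, yj, sj) in sites.items():
        if i < j:
            d = math.hypot(xi - xj, yi - yj)
            weight = si * sj * math.exp(-d / D)   # gravity-style interaction
            edges.append((i, j, weight))

for i, j, w in sorted(edges, key=lambda e: -e[2]):
    print(f"{i} -- {j}: interaction {w:.3f}")
```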

Joaquim de Carvalho (Universidade de Coimbra - Portugal): Networks, self-organisation and historical research: uncovering hidden structures in historical data
The theme of self-organisation and the emergence of complex structures has been the object of intense interdisciplinary interest since the beginning of the century. Historical research has remained somewhat distant from these new approaches, certainly because of methodological and empirical difficulties in finding opportunities to apply such concepts to concrete historiographical problems supported by historical sources. We will demonstrate that it is possible to detect historical processes in which there is strong evidence of mechanisms of self-organisation at work. We will also show how common sources contain precious information that can be made visible by applying specialised network analysis tools. We will focus on two examples: the choice of godfathers as recorded in parish registers and the circulation of mail in the 18th century. The main conclusion from our examples is that historians should bring into their conceptual and methodological toolkit the findings of Complexity Science, namely the concepts of Emergence and Self-Organisation, and the techniques of network reconstitution and analysis. By incorporating tools and concepts such as these, new insights can be gained into the fundamental questions of the persistence of structures and the interaction between structures and individual agency.
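The first example (the choice of godfathers) lends itself to a simple network reconstitution. The sketch below builds a directed network of godfather choices from a handful of invented baptism records and looks at the in-degree distribution; a heavy-tailed distribution, with a few much-chosen godfathers, is the kind of signal of self-organisation the abstract refers to. The records and names are fabricated for illustration only.

```python
# Sketch: reconstituting a godparenthood network from baptism records and
# inspecting its in-degree distribution (record values are invented).
from collections import Counter
import networkx as nx

baptisms = [
    {"father": "João Dias", "godfather": "Dr. Almeida"},
    {"father": "Pedro Luz", "godfather": "Dr. Almeida"},
    {"father": "Rui Costa", "godfather": "Dr. Almeida"},
    {"father": "João Dias", "godfather": "Padre Neves"},
]

G = nx.DiGraph()
for rec in baptisms:
    G.add_edge(rec["father"], rec["godfather"])  # choice of godfather as a directed tie

# A few heavily chosen godfathers (high in-degree) suggest a self-organising process.
in_degree_distribution = Counter(dict(G.in_degree()).values())
print(dict(G.in_degree()))
print("in-degree distribution:", dict(in_degree_distribution))
```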

Clement Levallois (Erasmus University Rotterdam - Netherlands): Visualization of large and time-dependent networks: advances and limits
Network visualizations are helpful devices for the exploratory analysis of a dataset and are increasingly accepted as legitimate formats for the visual display of an argument in the social sciences and the humanities.
I will report on recent advances in software development (evolutions of the Gephi platform) which widen the scope of these visualizations: larger datasets can now be handled, and datasets of that size which also have a time dimension can be represented.
Experimenting with these new possibilities raises the question of the meaning of the visualizations thus produced. Based on the visualization of a large dataset of transactional data, I will discuss how the (still young!) conventions governing the meanings attached to visualizations of dynamic networks are challenged by the scale and transactional nature of the dataset.
These advances are themselves anything but stabilized results, and the conclusion will discuss questions that are opened in the representation of large, time-dependent networks.

Sofia Oliveira, Jared Hawkey and Nuno Correia (CADA and Universidade Nova de Lisboa - Portugal): Finding and Representing Personal Time/Space Patterns
The talk describes the work carried out in the Time Machine project, which aims to stimulate reflection about personal routine while engaging in a dialogue about the daily uses of ubiquitous computing and in a broader discussion of the methods of, and relations between, art and science. Time Machine was proposed as a collaborative project between CADA, a Lisbon-based art group that creates playful, experimental software mainly using mobile technologies, and the Interactive Multimedia Group of CITI/FCT/UNL, which works on different aspects of describing, processing and interacting with multimedia information. One of the main outcomes of Time Machine is a mobile application that captures and processes location data and creates personal, intimate maps of time and space that capture routine and activity. Visual representations exploit color, shape and proximity to show the network of meaningful places and how they are organized temporally. The visual representations rely on a carefully designed and rigorous processing framework that enables concrete representations of time and space but also supports the development of subtle and ambiguous forms of representation. The work was developed in an iterative process in which multiple processing and visualization prototypes were developed, tested and subjected to critical review. The talk will discuss the different methods employed in the project, the results obtained so far, and open issues for further research. Particular attention is given to the tension created by the project's different goals, considering its desired artistic and scientific outcomes. http://img.di.fct.unl.pt http://cada1.net
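The step of turning raw location fixes into a network of "meaningful places" can be sketched with a standard density-based clustering pass, as below (scikit-learn's DBSCAN with a haversine metric). The coordinates are invented, and the actual Time Machine processing framework may use a different technique entirely.

```python
# Sketch: turning raw location fixes into a handful of "meaningful places"
# by density-based clustering (coordinates are invented).
import numpy as np
from sklearn.cluster import DBSCAN

points = np.radians([
    [38.7369, -9.1427], [38.7371, -9.1425], [38.7368, -9.1430],  # home?
    [38.7436, -9.1602], [38.7438, -9.1600],                      # work?
    [38.7223, -9.1393],                                          # one-off visit
])
# eps = 200 m expressed as an angle on the Earth's surface (radius ~6371 km).
eps = 200 / 6_371_000
labels = DBSCAN(eps=eps, min_samples=2, metric="haversine").fit_predict(points)
print(labels)   # e.g. [0 0 0 1 1 -1]: two recurrent places, one noise point
```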

Wednesday, 9 October 2013

Networks over space and time: modelling, analyzing, and representing complex data in the digital humanities


November 8th 2013, FCSH, New University of Lisbon, Lisbon, Portugal

WG1, Spatial and temporal modelling: representation of space and time, NeDiMAH

(Auditorium 2, Tower B, 3rd floor)

The workshop is free. All those who wish to attend, please register here:

More info (programme - travel grants)

This workshop is about interconnections between, and within, space and time. But it also sees interconnections at other levels: between modelling and analysing, between theory and practice, and between the humanities and computing.

In the humanities, a close look at networks and relationships, whether formal or informal, personal or social, of information or of knowledge, of transportation or of communication, has always been an important subject of study and, at the same time, a powerful analytical process. In computer science, the study of networks and of methodologies for analysis and visualization of these relationships is nowadays an increasingly well understood and practiced area of knowledge. In both the humanities and computer science, researchers are well aware of the dynamic nature of data and knowledge when viewed through the lenses of space and time.

Networks can be studied from a purely spatial perspective, if the object of analysis is the distance between things or people. However, there are two other dimensions which make the study of networks a more complex and richer methodology. Both time and social relationships help to extend the focus of analysis from distance to connectivity, and this has been an important concept for the Humanities, as it is for the Social Sciences, at least since the 1930s. In the field of spatial analysis, the focus has also tended to shift from an almost exclusively quantitative approach to one that tries to develop a new ontological and epistemological view, combining quantitative with qualitative methods and sources, a view also important for humanists. When put together, time, spatial analysis, with its derivative, spatial network analysis, and social network analysis can be a powerful way of thinking about the world (theory) and of explaining it (methodology). And at the present time, with the integration and plasticity of the digital, the rising awareness of geography and time through the Internet's social networks, and the growing usability of the Web 2.0, thinking about and explaining networks can benefit from powerful tools that are increasingly complex and accessible at the same time.

The aim of this workshop is to combine analytical perspectives in the study of networks, over space and time, in humanities disciplines and on various themes, to identify methodologies, discuss research results, and encourage interdisciplinary approaches. The main focus of this workshop will be the areas of modelling and representation, highlighting them more as methods of analysis and knowledge production than merely as tools.

The expected outcomes of this workshop are to document the case studies presented, to discuss and share the methods and research results arising from them, and to identify, in the form of a report, the interconnections between the humanities and the digital, helping to define a taxonomy of new methodologies and to develop a community of researchers for future collaborative work.

Wednesday, 15 May 2013

A Day in the life of a Digital Humanist (Spanish/Portuguese version)

Call for participation

To all "digital humanists", and to all those who lead and/or collaborate on humanities projects with a digital component.
Join us for the first Day of Digital Humanities (ES/PT version), which will take place on 10 June 2013.
A Day in the Life of the Digital Humanities (Dia HD) is a project that aims to document one working day of people involved in projects linking the humanities and computing. It seeks to bring together people from around the world who speak or work primarily in Spanish or Portuguese to record, through text and images, the events and activities of a working day. The goal of the project is to gather in a single place the work of all participants, thereby building a digital resource that can help answer the question "What do digital humanists actually do?"

To access the event page and register: http://dhd2013.filos.unam.mx/pt-br/

The project has been running since 2009 in its English version and this year takes place for the first time in Spanish and Portuguese versions, organized by the following institutions:

Wednesday, 16 November 2011

Abstracts - Digital Methods and Tools...

Program


John Bradley (King's College London), Silk Purses and Sow's Ears: In what ways can structured data deal with historical sources?


Joaquim de Carvalho (Universidade de Coimbra), Combining source oriented and person oriented data models in prosopographical database design


Paul Ell (Centre for Data Digitisation and Analysis, Queen's University Belfast), Humanities Geographical Information Systems: texts, images, maps
This paper discusses the developing use of Geographical Information System (GIS) technology in the humanities. It examines early projects the Centre for Data Digitisation and Analysis (CDDA) was involved in, which created the first incarnation of humanities GIS, termed ‘Historical GIS’, with a focus on census and other statistical data together with administrative unit boundaries. These projects resulted in the ability to produce choropleth maps of a range of socio-economic data and to compare these data over time.
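The choropleth mapping described above can be sketched in a few lines with the geopandas library; the file names, join key and variable below are placeholders standing in for digitised boundaries and census tables of the kind CDDA produced.

```python
# Sketch: a choropleth of a historical census variable joined to administrative
# boundaries (file and column names are placeholders).
import geopandas as gpd
import pandas as pd

parishes = gpd.read_file("parish_boundaries_1911.shp")   # digitised boundaries
census = pd.read_csv("census_1911.csv")                  # e.g. literacy rates
merged = parishes.merge(census, on="parish_id")

ax = merged.plot(column="literacy_rate", cmap="viridis", legend=True)
ax.set_title("Literacy rate by parish, 1911 (illustrative)")
ax.figure.savefig("literacy_1911.png", dpi=200)
```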
Arguably, historical GIS had a limited impact. Such systems are complex and costly to create and require specialised skills to use. Moreover, the vast majority of humanities scholars are interested not in statistical data but in text. Work has been done to bring a wider range of content into historical GIS, including multimedia materials such as photographs, historical maps, gazetteers, travellers' tales and more. Attempts have also been made to make these resources more accessible through online interfaces rather than bespoke GIS software such as ArcGIS. An exemplar involving CDDA will be reviewed.
The challenge remains, however, to put the humanity into humanities GIS. This involves far closer interaction between GIS and texts, and a greater focus not on producing maps but on using the spatial functionality of GIS to draw sources together by place. The paper discusses new work at CDDA, at Lancaster University and at UC Berkeley to make this happen and to transform a niche methodology into a core element of humanities research practice.


Luís Espinha da Silveira (IHC, FCSH-Universidade Nova de Lisboa), GIS and Historical Research: promises, achievements and pitfalls
It is usually agreed that GIS began to be introduced into historical research in the mid-1990s (Gregory & Ell, 2007, 15-16; Knowles, 2002, XI). The description of the atmosphere of the 1998 and 1999 Social Science History Association (SSHA) sessions on historical GIS refers to people's “passionate engagement with methodology” and to participants' excitement, caused by the sense that they were “making something new by using new tools” (Knowles, 2000, 5-6). As had happened with quantitative history, historical GIS would be able to open up historical scholarship, inspire new creativity, challenge old assumptions, and promote the exploitation and understanding of new kinds of historical evidence (Knowles, 2000, 17-18). Some years later, Knowles (2008, 267), although acknowledging the great progress that had been made, also recognized that GIS's “promise, however, is far from fully realized”. Even so, it seems that the enthusiasm of the 1990s has not vanished, and a very interesting book, recently published, proposes to “advance an even more radical conception of GIS that will reorient, and perhaps revolutionize, humanities scholarship” (Bodenhamer, Corrigan & Harris, 2010, IX).
In this paper I will not question the need to develop the conceptual framework of the field of historical GIS, the importance of addressing the specific problems that historical information poses to GIS, the developments regarding textual information, the interest in exploring new forms of visualization, or the possibilities opened up by the Internet to create an infrastructure to support research. At the same time, I will emphasize how important it is for historians who are aware of the role of space in historical explanation to continue to address relevant historical questions and to integrate GIS tools into historical research methodologies, combining quantitative and qualitative approaches and paying attention to space, time and scale. I will illustrate this point with some examples taken from our research on Portuguese history.
Finally, I will call attention to some pitfalls created by the use of GIS technologies: the costs in terms of time and money; the fascination with the technology, which easily diverts you from research to data collection and dissemination; and the lack of academic recognition of new forms of publication. I will exemplify the first issue with a supranational project on population in the Iberian Peninsula.


Malte Rehbein (Universität Würzburg), Text Encoding: a historian's perspective
“Before they can be studied with the aid of machines, texts must be encoded in a machine-readable form” (Michael Sperberg-McQueen), and encoding is understood as “the process by which information from a source is converted into symbols to be communicated” (Wikipedia).
The focus of text encoding initiatives, and of the Text Encoding Initiative (TEI), which provides a standard for electronic texts in the humanities, has mostly been on two core disciplines, namely literary and linguistic studies, and projects that used computers to study texts have quite often been based on the digitization of printed material. But text encoding for historical research is about more than just text; it is about encoding (textual) culture and hence concerns many aspects of our work as historians (at least when dealing with primary sources).
This presentation illustrates how machine-readable information can be created and encoded and what role this may play in supporting our understanding of the past. Encoding is not restricted to text. The various examples of mostly ongoing historical research discussed here, spanning a range from early medieval biblical commentaries to letters from the WWII front, will demonstrate that text encoding from a historian's perspective must be understood holistically, including not only text, but also text carriers, context, implicit and explicit content, inter- and paratexts, text production processes, and text usage.
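As a concrete, if minimal, illustration of what such encoding makes machine-readable, the sketch below embeds a tiny TEI-style fragment of a front letter in a Python script and extracts the encoded persons, places and dates with the standard library. The fragment and its content are invented; real TEI documents are of course far richer.

```python
# Sketch: a minimal TEI-style fragment for a letter, parsed with the Python
# standard library (element names follow common TEI usage; the text is invented).
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
tei_fragment = """
<text xmlns="http://www.tei-c.org/ns/1.0">
  <body>
    <div type="letter">
      <opener><dateline><date when="1943-02-11">11 February 1943</date></dateline></opener>
      <p>Dear <persName>Maria</persName>, we reached <placeName>Porto</placeName> safely.</p>
    </div>
  </body>
</text>
"""

root = ET.fromstring(tei_fragment)
# The encoded persons, places and dates become machine-readable evidence.
for tag in ("persName", "placeName", "date"):
    for el in root.iter(f"{{{TEI_NS}}}{tag}"):
        print(tag, "->", el.text, el.attrib)
```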


Rita Marquilhas (Centro de Linguística - Universidade de Lisboa), The automatic research of digital editions
Databases with transcriptions of historical sources benefit from the adoption of XML-TEI mark-up. On the one hand, such a methodology allows for an electronic online edition of the text, since XML is a machine-readable format and all browsers can process it. On the other hand, the adoption of the TEI conventions for the textual mark-up gives the corresponding databases the design of a well-tested standard within scholarly editing.
These advantages – readability by computer programs and standardization according to scholarly criteria – have meant that one such marked-up database has recently been subjected to further processing, allowing for several empirical advances in a multidisciplinary domain.
The database in question is a corpus of Portuguese private letters written between the 16th and the 20th century (projects CARDS and FLY of the Linguistics Centre of Lisbon University, CLUL). In September 2011, the database contained c. 2,400 letters. A quick presentation of the mark-up used in these projects will be given.
The main experiments we have made on the corpus thus assembled were the following:
1.    Using information extraction procedures by means of Perl scripts expressly made by computer engineers, it was possible to isolate the textual parts of the letters that contained polite formal language and to account for the relevance of their semantics to politeness theory.
2.    Using lexical statistics software (both the commercial WordSmith Tools and the freely available AntConc), it was possible to compare the keywords of the letters’ text with those in two larger reference corpora of Portuguese. One contained oral utterances recorded in dialectology campaigns (Cordial); the other contained texts of several genres written in contemporary Portuguese (CRPC). The keyness of the letters corpus lexicon (i.e., the lexicon most distinctive vis-à-vis Cordial and CRPC) revealed that the letter genre is indeed much closer to spoken styles than to written ones (a sketch of this kind of keyness computation is given after this list).
3.    Adapting for the Portuguese language a set of statistical spelling-normalization tools originally designed for English (VARD2), we are obtaining some promising results in the attempt to automatically normalize our (highly variant) palaeographic transcriptions.
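For item 2, the sketch below shows one common way of computing keyness: Dunning's log-likelihood (G2) statistic comparing a word's frequency in the study corpus against a reference corpus. The word and frequencies are invented; WordSmith Tools and AntConc implement variants of this and other keyness measures.

```python
# Sketch: log-likelihood (G2) keyness of a word in a study corpus vs a reference corpus.
import math

def keyness(freq_study, total_study, freq_ref, total_ref):
    """Dunning log-likelihood: high values mean the word is 'key' in the study corpus."""
    expected_study = total_study * (freq_study + freq_ref) / (total_study + total_ref)
    expected_ref = total_ref * (freq_study + freq_ref) / (total_study + total_ref)
    g2 = 0.0
    if freq_study:
        g2 += 2 * freq_study * math.log(freq_study / expected_study)
    if freq_ref:
        g2 += 2 * freq_ref * math.log(freq_ref / expected_ref)
    return g2

# Invented counts: a form occurs 120 times in 500k letter tokens
# versus 30 times in 2M reference tokens.
print(round(keyness(120, 500_000, 30, 2_000_000), 1))
```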


Melissa Terras (University College London), Exploring the potential of Digital Humanities with the Transcribe Bentham project
There is much confusion about what Digital Humanities actually is, with many practitioners in the field debating the use of the term and the activities which constitute Digital Humanities. The aim of this paper is to present a bottom-up approach to defining the scope and potential of Digital Humanities by presenting a single project, Transcribe Bentham. By focussing on one individual project and the various ways in which technologies have allowed research that would otherwise have proved impossible, Transcribe Bentham allows us to understand the benefits and potential of using digital techniques within the humanities.
Transcribe Bentham is a one-year project funded by the Arts and Humanities Research Council, running from April 2010 until April 2011 and housed under the auspices of the Bentham Project at UCL (http://www.ucl.ac.uk/Bentham-Project/). The Bentham Project aims to produce new editions of the scholarship of Jeremy Bentham (1748-1832), the English jurist, philosopher, and legal and social reformer. He is well known for his advocacy of utilitarianism and animal rights, but is perhaps most famous for his work on the “panopticon”: a type of prison in which wardens can observe (-opticon) all (pan-) prisoners without the incarcerated being able to tell whether or not they are being watched. Twenty volumes of Bentham's correspondence have so far been published by the Bentham Project, plus various collections of his work on jurisprudence and legal matters. However, UCL Library Services holds 60,000 folios of Bentham's manuscripts, and there is much more work to be done to make his writings more accessible and to provide transcripts of the materials therein.
Transcribe Bentham has tested the feasibility of outsourcing the work of manuscript transcription to members of the public, aiming to digitise 12,500 Bentham folios and, through a wiki-based interface, allowing transcribers access to images of unpublished manuscripts in order to create TEI-encoded transcripts for checking by UCL experts. Approved transcripts have been stored and preserved, together with the manuscript images, in UCL's public Digital Collections repository, making innovative use of traditional library material.
The Transcribe Bentham project is now in its reporting phase, after six months of active promotion of the wiki-based transcription tool. This paper will present an overview of the project, demonstrating how digitisation, online presence, text encoding, transcription, crowdsourcing, and online outreach can benefit those working with humanities data.


Daniel Gomes (Portuguese Web Archive – Fundação para a Computação Científica Nacional), Web Archiving
The web is the primary means of communication in developed societies. All kinds of information that describe our recent times are published on the web. As everyone can publish online, it becomes possible to analyze events through various first-person points of view that provide different perspectives, and not just the official descriptions issued by dominant forces. Thus, the web is a valuable resource for contemporary historical research. However, its information is extremely ephemeral. Several research studies have shown that only a small amount of information remains available on the web for longer than one year.
Web archiving aims to acquire, preserve and provide access to historical information published on the web. In November 2011, there were 52 web archiving initiatives worldwide, including services that enable anyone to create their own historical collections. Web archiving also has an important sociological impact, because ordinary citizens are publishing information of personal meaning on the web without any preservation concerns (such as saving their pictures to disk). In the future, web archives will be the only source of personal memories for many people.
There are tools, such as browser add-ons, that facilitate historical research over web archives. However, most of them require that users know the exact address (URL) where the information they need was published in the past. The Portuguese Web Archive provides a full-text search service over one billion items of content archived from 1996 to 2011 (available at www.archive.pt), as well as other tools for historical research over archived web collections.
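Web archives can also be queried programmatically. As a generic illustration (not a description of the Portuguese Web Archive's own interfaces), the sketch below lists a few captures of a URL from the Internet Archive's public CDX index; the query URL and parameters follow that service's documented API.

```python
# Sketch: programmatic access to a web archive via the Internet Archive's CDX index.
# (The Portuguese Web Archive provides its own search services; this is a generic example.)
import json
import urllib.request

url = ("http://web.archive.org/cdx/search/cdx"
       "?url=fcsh.unl.pt&output=json&limit=5")
with urllib.request.urlopen(url) as resp:
    rows = json.load(resp)

header, *captures = rows          # first row holds the field names
for capture in captures:
    record = dict(zip(header, capture))
    print(record["timestamp"], record["original"])
```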


Peter Doorn (Data Archiving and Networked Services, Nederland), Computational history among e-science, digital humanities and research infrastructures: accomplishments and challenges
This presentation will focus on the following subjects: first, I will briefly introduce DANS; after that, I will place the developments in computational history in the context of developments in e-Science and the digital humanities. Over the years we have seen a gradual increase in the scale of projects, partly brought about by computation itself and the specialization it requires. As a result, we can see increased attention to digital data and research infrastructures, both at the national and at the European level.
About DANS:
DANS is an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) and the Dutch Research Funding Organisation (NWO) and was founded in 2005 (www.dans.knaw.nl). It builds on the work of predecessors, the first of which dates back to 1964 (Steinmetz Foundation and Archive for the social sciences). The Netherlands Historical Data Archive (NHDA) was created in 1989, inspired by the needs of historians and the creation of numerous historical databases, which needed to be archived and kept accessible for later use. The central task of DANS is to provide permanent access to digital data in the humanities and social sciences, although we recently started to gradually expand our services to other domains as well.
DANS maintains a digital archive with substantial data collections in history, social sciences, and archaeology. We also carry out data projects in collaboration with research communities and partner organizations. Moreover, we give advice and support, for example we developed a Data Seal of Approval (see: http://www.datasealofapproval.org/), aiming at quality control of data and repositories, and maintain a Persistent Identifier Infrastructure based on the URN (see: http://www.persid.org/index.html).
In short, DANS promotes permanent access to digital research data; it encourages scientific researchers to archive and reuse data by means of our online archiving system EASY; we provide access, through www.narcis.nl, to thousands of scientific datasets, e-publications and other research information in the Netherlands; moreover, DANS provides training and advice, and we perform research into archiving of and access to digital information.
History and computing as e-Science:
It makes sense to place the developments of computational history in the past decade in the context of e-science, which was defined back in 2001 as “Science increasingly done through distributed global collaborations enabled by the Internet, using very large data collections, tera-scale computing resources and high performance visualisation.” (UK Department of Trade and Industry; Research Council e-Science Core Programme). Jim Gray, Tony Hey and others spoke of a “fourth paradigm” in science, characterized by high data intensiveness. Increasingly, scientific breakthroughs will be powered by advanced computing capabilities that help researchers manipulate and explore massive datasets. The speed at which any given scientific discipline advances will depend on how well its researchers collaborate with one another, and with technologists, in areas of e-Science such as databases, workflow management, visualization, and cloud-computing technologies.
Although the scale of humanities research, including the work of historians, is much smaller than that in astronomy or particle physics, most specialists agree that the tendencies and needs of e-science and e-humanities are basically similar. Humanities computing was defined by Willard McCarty in 1999 as “an academic field concerned with the application of computing tools to arts and humanities data or to their use in the creation of these data.” Terms such as computational humanities, digital humanities and e-humanities are now also in use, and essentially denote similar things (with nuances I do not intend to go into).
Since the 1990s many people have come up with definitions for or descriptions of computing in historical research (this list can be easily expanded):
•    Charles Harvey: historical computing must be concerned with the creation of models of the past or representations of past realities.
•    Matthew Woollard: History and computing is not only about historical research, but also about historical resource creation.
•    George Welling: Historical Informatics (computational history) is a new field of interdisciplinary specialization dealing with pragmatic and conceptual issues related to the use of information and communication technologies in the teaching, research and public communication of history.
•    Lawrence McCrank (2002): Historical information science integrates equally the subject matter of a historical field of investigation, quantified social science and linguistic research methodologies, computer science and technology, and information science, which is focused on historical information sources, structures, and communications.
•    Boonstra, Breure, Doorn (2004): Historical information science is the discipline that deals with specific information problems in historical research and in the sources that are used for historical research, and tries to solve these information problems in a generic way with the help of computing tools
In a study on the “Past, Present and Future of Historical Information Science”, which I published together with Onno Boonstra and Leen Breure, we distinguished four categories of information problems in historical research, which we ordered along what we called the “life cycle of historical information”: information problems of historical sources (representation); of relationships between sources (harmonization, linkage); of historical analysis (qualitative and quantitative); and of the presentation of sources or analyses (visualization, edition). The PDF of the book can be found here: http://www.dans.knaw.nl/content/categorieen/publicaties/past-present-and-future-historical-information-science.
Back in 2004, we were a bit wary about the developments in history and computing over the previous few years. It seemed as if the exciting and formative years of historical computing (roughly the period 1985-2000) were over. Many mainstream historians were just happy to be able to use the computer for text processing, web browsing and emailing.
Probably a degree of specialisation did occur: you simply could not expect every historian to be a programmer, as Le Roy Ladurie once said they should be. The scale of historical research had to go up to get beyond the basic level of computing techniques. Collaboration with professional IT specialists was necessary, and I think we are gradually moving in that direction.
The increase in the scale of digital history projects:
In my presentation I will mention a few examples of big projects that we were involved in and in which computing scientists and historians worked together: the digitization of the Dutch censuses and the project “Life Courses in Context” (the first project in the humanities in the Netherlands to receive an investment grant of a few million euros; see www.volkstellingen.nl); the project “Climate of the World Oceans”, in which historians, computing scientists and climatologists worked together to retrieve weather observations from historical ships’ logs (www.knmi.nl/cliwoc/); the collaboratory on institutions for collective action (http://www.collective-action.info/); the collaboratory ‘Clio Infrastructure’, building and connecting global data hubs on world inequality and the increasing divergence between rich and poor countries (www.clio-infra.eu); the projects “Telling witnesses” and “Veteran tapes”, in which many hundreds of qualitative interviews have been collected and analysed as “oral histories” of the Second World War and other conflicts (http://getuigenverhalen.nl/ and http://www.watveteranenvertellen.nl); and the project Medieval Memoria Online (MeMO), which aims to help scholars carry out research into memoria during the period up to the Reformation (c. 1580) in the area that is the present-day Netherlands (http://memo.hum.uu.nl/). In all these projects, historical researchers and computing experts (and often specialists from other disciplines as well) from several institutes worked or are working together.
The need for research infrastructures:
It is vital that these projects rest on a solid foundation, not only during the course of the project but also afterwards. If no infrastructure exists that can guarantee sustainability after a project is finished, the results are in danger of disappearing soon after the project's end, and the investment and effort will be lost. This is exactly why digital infrastructures are necessary: to support and maintain the collaborative efforts. The services developed in the projects need to be sustainable, and they can only be maintained efficiently if they are generic and re-usable. This is why, a few years ago, not just in the natural and life sciences but also in the humanities and social sciences, initiatives were taken to set up infrastructures to support and sustain the investments made in large (and small) projects. The European Strategy Forum for Research Infrastructures (ESFRI) formulated a first “Roadmap” for the creation of such infrastructures (http://ec.europa.eu/research/infrastructures/index_en.cfm?pg=esfri). DARIAH, the emerging Digital Research Infrastructure for the Arts and Humanities, is one of the two infrastructures proposed on the ESFRI Roadmap for the humanities, including history (www.dariah.eu). DARIAH aims to “link and provide access to distributed digital source materials of many kinds”. In the field of linguistics, CLARIN, the Common Language Resources and Technology Infrastructure, has been set up (www.clarin.eu), and there are also examples in the social sciences. In several countries, including the Netherlands, it is proposed that CLARIN and DARIAH will work closely together or even merge.
The digitization of cultural heritage material, including archival sources, is of great importance for historians and other humanities researchers, and in this field too we see the creation of large-scale infrastructures. Europeana enables people to explore the digital resources of Europe's museums, libraries, archives and audio-visual collections (www.europeana.eu). It promotes discovery and networking opportunities in a multilingual space where users can engage, share in and be inspired by the rich diversity of Europe's cultural and scientific heritage. The breadth of the endeavor is at the same time its limitation for researchers: although millions of heritage objects can be “explored”, the content and descriptions are oriented towards consumption by a general audience, not towards analytical use by specialists. The European Holocaust Research Infrastructure, which is supported by DARIAH in solving the technological challenges of bringing together virtual resources from dispersed archives, is a good example of an infrastructure at the interface of heritage and historical research.
Conclusion:
The intention of the organisers of the Lisbon workshop on Digital Methods and Tools for Historical Research is to discuss the implications of using digital technologies in the production and dissemination of knowledge in History.
Two of the implications I have highlighted are the increase in the scale of digital history projects and the need for research infrastructures to sustain the results of digital projects. Multidisciplinary and international collaboration is indispensable for professional results. Computational history is in this sense comparable to (or simply part of) data-driven e-Science.
This conclusion is independent of the type of methodology we look at: whether it is relational databases, geographic information systems, (text) encoding, or digitization and the preservation of digital memory. Such methodologies rarely stand alone in a digital project; rather, they are phases in the cycle that many digital projects go through: after digitization comes encoding (for textual sources) or structuring in databases. Analysis is the next phase, for which GIS is very useful when the data have a geospatial component that can be visualised. At the end of the cycle, proper measures need to be taken in order to keep the results accessible in the future.