Paolo Monella Post-doc scholarship in Digital Humanities Accademia dei Lincei, Rome 2012

Cologne Dialogue on Digital Humanities 2012: a thoughtful report



Conference website (with schedule, full-text papers, slides and online comments).

The (unedited) notes I took at the conference.



The first of the Cologne Dialogues in Digital Humanities (23-24 April 2012) was devised by its convenor, Manfred Thaller, in the fashion of medieval controversiae: each of seven topics was assigned to a pair of scholars, to be debated from different angles. After the talks of the two opponents, a discussion between them took place, followed by a broader one with the audience.


The first day of the Conference (controversies 1-3) was meant to answer the question: "What are the Digital Humanities?", while the second day (controversies 4-7) was entitled: "Making the Digital Humanities work: Tools, infrastructures, technology and conceptual work".

Controversy 1: Do the Digital Humanities have an intellectual agenda or do they constitute an infrastructure?

Opponents:

Willard McCarty's answer to the question was: "Agenda before infrastructure". He argued that the Digital Humanities need to follow a "humane project", not different from the general agenda of the Humanities. They have to find out what is unique to humans as opposed to animals and machines, and to study and foster that "residue of uniqueness".

A question like "What is the agenda of the Digital Humanities?" is certainly very broad – maybe too broad – and McCarty's approach of recalling the general aim of the Humanities was possibly the only viable one in the face of such a general formulation.

However, a rather paradoxical implication of McCarty's argument is that the goal of the computer-assisted study of human artifacts becomes the study of whatever is left of human creativity once everything computable is taken out (the human "residue" after a comparison with what machines can also do).

In other words, one could ask where the peculiar hermeneutical contribution of the Digital Humanities may lie, if any exists. This is, in fact, the direction in which the subsequent discussion turned.

Willard McCarty's answer (The residue of uniqueness) can be summed up thus: the intellectual "agenda" comes before the building of the technological "infrastructure". McCarty brought the Digital Humanities – and therefore their goals – back within the scope of the Humanities, assigning to both the aim of studying the human "residue" that can be reduced neither to the animal nor to the machine.

Yet his otherwise interesting discussion does not make clear what the proprium of the Digital Humanities as a discipline might be, a proprium that must necessarily have to do precisely with the way in which computing "machines" work.

Controversy 2: Are all approaches towards interdisciplinary research between the Humanities and Computer Science meaningfully represented by the current concept of Digital Humanities?


Opponents:

The two talks of this controversy, in my opinion, both proved, one willingly, the other unwillingly, the same point: that the Digital Humanities today are strongly centred on the Anglo-American community, its agenda and its perspective.

Susan Schreibman, the editor of the admirably vast Companion to Digital Humanities, discussed the sections of the Companion, acknowledging that in its first edition the contributors from outside the USA and the UK were too few, and announcing that a new edition is underway, in which a more diverse pool of contributors will be involved. In other words, the Companion was taken as a microcosm representative of the macrocosm of all possible approaches to the Digital Humanities.

Domenico Fiormonte started off by declaring that he now studies both the Digital Humanities and the sociology of the Digital Humanities. And indeed the latter was the topic of his talk, one of the most interesting of the Conference. After arguing – loud and clear – that the "methodological question" is what differentiates the Anglo-American approach from the rest of the community, he demonstrated with bare facts that "non Anglo-American Digital Humanities are almost invisible", that international organisations and their boards are overwhelmingly dominated by scholars from the USA, Canada and the UK, and that this has a deep impact on the definition and control of the standards used to encode cultural artifacts. Very interestingly, he gave the example of the Unicode representation of writing, which is more "Western-centric" than one would think, when compared with some Indian scripts.

The somewhat lame discussion that followed such potentially heated arguments demonstrates, in my view, that this debate risks boiling down to a protest by European scholars at being kept away from the Anglo-American-led table of the Digital Humanities. The worst risk is that the whole controversy, if tackled only from this angle, will vanish once Europe joins the table.

A rather more interesting issue emerged very clearly from Fiormonte's discussion of the Unicode encoding of Sanskrit writing: oversimplified modelling by Western scholars will inevitably tend to be "Western-centric", and therefore to exclude the cultures that do not participate actively in the modelling process. As a consequence, sophisticated thinking – and complex modelling – about cultural phenomena is inclusive in itself. Which implies that we have an ethical – as well as a scholarly – duty to theorise in a complex and sophisticated manner.
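Fiormonte's Unicode example can be made concrete. The snippet below is a minimal sketch (the specific syllable is my own illustration, not necessarily his example): in Devanagari, the script used for Sanskrit, Unicode stores characters in logical order, which does not coincide with the left-to-right visual order that Latin-centric text models take for granted.

```python
# The syllable "ki" in Devanagari: in memory, Unicode stores the
# consonant KA (U+0915) first and the vowel sign I (U+093F) second...
ki = "\u0915\u093F"  # कि

print([hex(ord(c)) for c in ki])  # ['0x915', '0x93f']

# ...yet on screen the vowel sign is rendered to the LEFT of the
# consonant. A naive "one code point = one left-to-right glyph" model,
# adequate for Latin script, silently breaks down here: the rendering
# engine must reorder the signs, and a text model built only on Western
# alphabetic assumptions misses this entirely.
```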

The two talks of this controversy showed, from different viewpoints, the same reality: the Anglo-American hegemony – at times even candidly unaware of itself – over the Digital Humanities.

Susan Schreibman (Digital Humanities: Centres and Peripheries) used the table of contents of the Companion to Digital Humanities she edited as a microcosm sufficient to illustrate all the main approaches to the discipline.

Domenico Fiormonte (Towards a Cultural Critique of the Digital Humanities) demonstrated, facts in hand, how the large international "Digital Humanities" organisations are in fact dominated by narrow circles of American or British scholars. The result, as became evident from an example concerning the Unicode encoding of Sanskrit, is that the digital modelling of cultural phenomena is strongly centred on the Western model, and in practice excludes or misrepresents the symbolic forms of 'other' cultures.

The risk, I fear, is that this kind of discussion avoids the question of the different methodological approaches to the discipline, and boils down to a request by European scholars to be invited to the "Digital Humanities" table alongside their Anglo-American colleagues.

A more interesting line of thought, however, can come out of the example of (non-linear) Sanskrit writing brought up by Fiormonte: as long as Digital Humanities scholars are predominantly Western, a simplistic modelling of cultural phenomena (in this case, of alphabetic writing) will inevitably lead to 'Western' models, with the effect of excluding other cultures.

This turns the intellectual duty to think and model in a complex way into an ethical duty as well, so as to include cultural phenomena 'distant' from the scholar.

Controversy 3: What is the scope of the Digital Humanities? What is the relationship between individual disciplines served by them?

Opponents:

Jeremy Huggett, a digital archaeologist, discussed the relationship between Archaeology, Digital Archaeology and the Digital Humanities, and demonstrated that the Digital Humanities today are "text-centric" and divided into sub-disciplines that do not always acknowledge each other.

Interestingly, as soon as the right questions are posed, as Manfred Thaller did at this Conference, the glossy and irenic surface of the Digital Humanities discourse reveals invisible boundaries and latent fractures.

Jan Christoph Meister tackled the issue from a more theoretical perspective, interestingly discussing the concepts of "discipline" and of the "digital". Eventually, he concluded that not all humanistic questions can be (or are worth being) tackled digitally today, and that disciplines are a nineteenth-century construct doomed to vanish, also "through the use of a new conceptual lingua franca: digital conceptualisation".

In this case, the discussion that followed was even more interesting than the talks themselves, as it circled around (rather than tackling directly) one basic issue: what is computable in the Humanities? And is it worth having a computer process it automatically, rather than letting humans scrutinise it traditionally?

Jeremy Huggett, an archaeologist, discussed the relationships between Archaeology, Digital Archaeology and the Digital Humanities, accusing the latter of being too "text-centric" and of marginalising sub-disciplines, such as Digital Archaeology itself, that are not based on textual material (Core or Periphery? Digital Humanities from an Archaeological Perspective).

The question probably deserved to be tackled in a theoretically deeper way, as Jan Christoph Meister did (Digital Humanities is us, or on the unbearable lightness of a shared methodology). Meister first set out his definitions of "discipline" and "digital", and then concluded that disciplinary distinctions are a nineteenth-century construct destined to disappear, and that not all humanistic questions are suited (at the current state of our knowledge) to being tackled with computational methods.

The discussion that followed was, as sometimes happens, more interesting than the talks themselves, in that it touched on (without tackling it directly) a key issue: what is computable in the Humanities, and what is not? It was probably a missed opportunity for a fundamental reflection, one often avoided in favour of considerations tied to the "infrastructure" of research.

Discussion on the publication of the Conference materials

The publication of the Conference materials, discussed at the end of the first day, is worth a separate note. The recent volume Debates in the Digital Humanities was very successful in sparking a broad discussion in the form of conference talks related to the volume's content, blog posts, comments on those posts and tweets.

The model chosen by Manfred Thaller to achieve a similar goal is slightly more centralised, but equally interesting. The starting point here is the conference itself, and the pivot of the whole process is the Conference website, which even stated the Twitter hashtag, #CologneDialogues.

I myself saw a very efficient young webmaster asking all speakers during the Conference for their revised papers and their slides, which were uploaded to the Conference website in real time. The website is expected shortly to host all full-text papers, presentation slides and eventually even the video recordings of all talks. Immediately after the conference, invited speakers and the audience will be asked to post their comments on the talks to the website, and after a period of virtual discussion online the proceedings will be printed – certainly before the next Dialogue (December 2012).

Controversy 4: What is the appropriate role of markup?

Opponents:

While the first day of the Conference dealt with the disciplinary definition of the Digital Humanities, the second one was meant to address more specific issues, as it was entitled "Making the Digital Humanities work: Tools, infrastructures, technology and conceptual work".

Controversy 4, the first of the second day, addressed an issue dear to the theoretical reflection of many Italian digital humanists and close to the research lines of the Centro Linceo Interdisciplinare "B. Segre": the digital modelling of text.

Espen Ore stated that even word separation in rune-stone inscriptions, the obeloi (critical signs) of the ancient Alexandrian philologists and modern typographical features like indentation can be considered forms of markup. He thus argued in favour of the central role of markup in any form of text encoding – not only digital.

However, his discussion later focussed on a less interesting topic: XML/TEI inline vs. stand-off markup. Inline markup, he showed, is fully interoperable, while stand-off markup makes overlapping hierarchies easier to deal with and can be exported to a relational database, but does not provide any standard for data interchange.

It seems to me that the inline vs. stand-off markup issue, albeit quite banal in itself, exposes the limits of XML/TEI. The latter is meant to provide both a tool for "rich" markup and a highly interoperable standard. However, Espen Ore's case study of the Ibsen Letters project showed that really rich markup demands external markup and relational databases, which in turn hinder full interoperability. Ore himself stated that a possible solution is to use stand-off markup inside the project and then "Project the data into a suitable hierarchy for XML (TEI) export". In other words, there is a risk that the really "rich" markup is dealt with within the project and lost when exporting data in XML/TEI – the latter degraded to a bare exchange format. Do real implementations of XML/TEI imply a tension between "rich" markup and interoperability? If so, the primary goal of XML/TEI would be missed.
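The inline vs. stand-off contrast can be shown in miniature. The sketch below is purely illustrative (the data structures are my own inventions, not the Ibsen project's actual format): the same annotation is stored once inside the character stream and once outside it as offset ranges, and a small function projects the stand-off ranges back into inline XML, the kind of "export" step Ore describes.

```python
# Toy contrast between inline and stand-off markup.

text = "Et dukkehjem"  # the raw character stream, kept untouched

# Inline: tags are embedded in the character stream itself.
inline = '<title xml:lang="no">Et dukkehjem</title>'

# Stand-off: the same information lives outside the text, as
# (start, end, tag, attributes) ranges over character offsets.
standoff = [(0, 12, "title", {"xml:lang": "no"})]

def to_inline(text, annotations):
    """Project non-overlapping stand-off ranges back into inline XML."""
    out, pos = [], 0
    for start, end, tag, attrs in sorted(annotations, key=lambda a: (a[0], a[1])):
        out.append(text[pos:start])
        attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
        out.append(f"<{tag}{attr_str}>{text[start:end]}</{tag}>")
        pos = end
    out.append(text[pos:])
    return "".join(out)

assert to_inline(text, standoff) == inline
```

The asymmetry Ore points to is visible even here: the stand-off list can hold overlapping ranges the text itself never could, but projecting them back into well-formed inline XML only works for a non-overlapping subset.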

To me, the most interesting part of Ore's talk was his report on decades of work on an actual XML/TEI project, the Ibsen Letters. Among the interesting practicalities Ore reported, I was struck by "fossil markup". The project started very early, when TEI P3 was the recommended standard. As the TEI guidelines evolved, older files encoded with P3 remained unchanged and kept traces of P3 markup no longer implemented in P4 and P5: tags no longer used, but not harmful – "fossil", in Ore's definition.

This apparently trivial practicality offered Willard McCarty and Tito Orlandi, during the subsequent discussion, a chance to stress the interpretive and subjective aspect of all markup. In fact, we should resist the temptation to think that XML/TEI P5, or whatever the most recent form of digital philological markup may be, is "the end of the history" of philology. McCarty provokingly argued that "Markup is only (maybe) useful for undisputable features, like line numbers, but not for interpretive features, as the latter are subjective". Tito Orlandi replied that all interpretations of a text (including essays of literary criticism) are a form of markup. While reading a literary essay, we are aware that we need to understand its rationale. In the same way, when we use markup modelled by a previous digital humanist, we have to put some effort into understanding the rationale behind their markup.

That "markup criticism" will increasingly be part of the skills of future (digital) philologists is an obvious reflection, but it sounds revolutionary to those who would like to use technology (XML/TEI in this case) to eschew inherently humanistic issues – like the subjective nature of interpretation – rather than to tackle them better.

Desmond Schmidt, the second speaker of this controversy, carried out a very sharp analysis of six major problems of XML/TEI embedded markup (excessive tag numbers, usability, overlapping hierarchies, interlinking, textual variation, interoperability). After a very convincing pars destruens, he proposed the design of a possible solution, involving the automated computation of textual variation (a model called MVD, or Multi-version document) through Nmerge, and the separation of "markup set" files (including "stand-off properties") from text files.

Schmidt's model is particularly interesting for those concerned with textual variation, a key feature of multi-testimonial textual traditions that is poorly represented in XML/TEI markup. His MVD model also allows great flexibility in mixing "markup sets", so as to overcome the typical XML/TEI "overlapping" issue and to leave researchers free to devise the most complex forms of markup even for multi-version texts. Finally, it correctly separates markup and "text", thus improving interoperability.
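The core intuition of the multi-version document can be illustrated with a toy structure (purely illustrative; Schmidt's actual MVD format and the Nmerge tool are far more sophisticated): shared and variant segments are merged into a single sequence, from which the text of each witness can be linearised.

```python
# Toy sketch of the multi-version-document idea: the text is a list of
# segments, each either shared by all versions or offering per-version
# variant readings.

mvd = [
    {"all": "The quick "},
    {"A": "brown ", "B": "red "},  # witnesses A and B diverge here
    {"all": "fox"},
]

def version(mvd, witness):
    """Linearise the text of one witness out of the merged structure."""
    return "".join(seg.get("all", seg.get(witness, "")) for seg in mvd)

assert version(mvd, "A") == "The quick brown fox"
assert version(mvd, "B") == "The quick red fox"
```

Because variation lives in the data structure itself rather than in tags embedded in one privileged version, "markup sets" can then be attached to ranges of this merged sequence without fighting a single XML hierarchy.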

A problem, however, lies right here, and was pointed out perfectly by Dino Buzzetti during the discussion: what is "text"? Schmidt's data model separates files including "markup sets" from files including "text". To put it in Buzzetti's words: "My worry is that we do not have a clear idea of what text is. How do you relate features that relate to the 'physical' layer of text encoding with properties that refer to content?".

In slide 52, Schmidt asks: "Are there any principles in the methodology of dealing with markup and texts that aren’t linked to a specific technology?". One such principle, notably addressed neither by XML/TEI nor, seemingly, by Schmidt's model, is a clear distinction between "document" and "text". The "document" level, for instance, may be represented by the peculiar graphical representation of a text in a medieval manuscript (including abbreviations, punctuation, the arrangement of page space, etc.), while the "text" level coincides with the content of the literary work that that graphical system of signs aims to represent. Without such a clear modelling distinction, implemented in the data model by separating "document" files from "text" files, no serious digital critical edition of multi-testimonial textual traditions is conceivable.
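The document/text distinction invoked here can be sketched in miniature (the abbreviations below are common medieval nomina sacra, but the mapping itself is my own toy example): the document layer records the graphical signs as written on the page, while the text layer is derived from it through an explicit, inspectable editorial mapping.

```python
# Minimal sketch of keeping the "document" layer (the graphical signs
# actually on the manuscript page) separate from the "text" layer (the
# literary content those signs encode).

document = "ihs xps ds"   # abbreviated nomina sacra, as written
expansions = {            # editorial interpretation, stored apart
    "ihs": "iesus",
    "xps": "christus",
    "ds": "deus",
}

def text_layer(document, expansions):
    """Derive the text layer from the document layer via the mapping."""
    return " ".join(expansions.get(word, word) for word in document.split())

assert text_layer(document, expansions) == "iesus christus deus"
```

Keeping the two layers in separate structures means a different editor can plug in a different expansion table and derive a different text from the very same document record, which is exactly the kind of separation a multi-witness critical edition needs.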

This controversy addressed a central theme of the Digital Humanities research carried out at the Centro Linceo Interdisciplinare: the digital modelling of text.

Espen Ore's talk (Document Markup – Why? How?) was supposed to defend the XML/TEI model of textual markup, today the de facto standard. In fact, it helped expose the limits of that model, many of which emerged precisely from his report on more than twenty years of work on the digitisation of Ibsen's letters. Ore himself admitted that the practical needs of "rich" textual markup require it to be encoded separately from the text ("stand-off markup"), so that it can be managed through relational databases. In this way, however, as Ore concedes, interoperability is lost.

But if the whole point of the XML/TEI project lies precisely in the promise of combining the "richness" of markup with the interoperability of a shared standard, is admitting a dichotomy between complex markup and interoperability not an admission of failure, or at least of the existence of a problem?

Even more interesting is another theme raised in the discussion that followed: the subjective nature of markup itself. Willard McCarty provocatively asked whether markup should be limited to "indisputable" data (such as line numbers), excluding every form of "interpretive" markup. Tito Orlandi replied that all markup is interpretive, and therefore subjective: we must thus get used to the idea that every philologist working on a previous philologist's markup will in turn have to interpret the rationale of that markup, just as they would when reading an essay written about a literary text.

That some form of "markup criticism" is destined to become an integral part of the skills of the future (digital) philologist is in itself a trivial observation, but it evidently sounds revolutionary to those who would like to use technology (the XML/TEI standard, in this case) to avoid the fundamental questions of textual criticism and hermeneutics, rather than to tackle them better.

In his talk (The Role of Markup in the Digital Humanities), Desmond Schmidt demonstrated very effectively the intrinsic limits of XML/TEI markup, proposing an alternative model (MVD, Multi-version document, with "markup sets" kept separate from the text). This model immediately proved clearly more flexible than XML/TEI in assigning not necessarily hierarchical "markup sets" to 'plural' texts (e.g. texts refracted into different authorial versions).

His model, though of great interest for those working on 'plural' texts such as those transmitted by multi-witness manuscript traditions, is flawed by the lack of an upstream distinction between "text" and "document". This is the 'original sin' of XML/TEI encoding, and it seems to be inherited by Schmidt's otherwise interesting proposal as well.

Controversy 5: Big structures or lightweight webs. What is the most sensible technical template for research infrastructures for the Digital Humanities?


Opponents:

This controversy, like the second one, dealt largely with the "sociology" of the Digital Humanities scholarly community, namely with the best model for scholarly collaboration. The tension seems to be between

  1. large organisations (like DARIAH and CLARIN) meant to attract funds, coordinate efforts and reach out to 'traditional' Humanities scholars (as argued by Sheila Anderson), and
  2. a more 'horizontal' collaboration model involving practitioners independently working on smaller-scale interoperable, modular and open-source projects, as exemplified by the THATCamp meetings and the InterEdition network (in favour of which Joris van Zundert, an exponent of InterEdition, argued very heartily).

During the animated discussion that followed, Anderson scored two main points in favour of larger organisations like DARIAH: she demonstrated their capacity to attract funds and to reach out to 'traditional' scholars. Willard McCarty, however, soon warned against the risk of relying on a "funding argument" (not arguing that something should be funded because it is good, but that it is good because it is getting funding).

For the rest, van Zundert was very effective in demonstrating that these organisations drain much of the available funding, leaving independent researchers and isolated institutions in a difficult position, and that projects like InterEdition are achieving impressive results through their 'horizontal' collaboration model, while avoiding the risk of building large centralised infrastructures that end up being – in his own words – "huge highways that nobody uses".

The fifth controversy returned to the theme, already addressed by the second, of the organisation of the community of digital humanists.

Sheila Anderson (Taking the Long View: From e-Science Humanities to Humanities Digital Ecosystems) argued in favour of the large international Digital Humanities organisations (such as DARIAH or CLARIN), capable of acting as a point of reference for funding flows, for the construction of large digital infrastructures, and for individual 'non-digital' scholars wishing to approach digital methods and tools.

Joris van Zundert (If you build it, will we come? Large scale digital infrastructures as a dead end for digital humanities), an exponent of InterEdition, a collaborative network of humanist programmers, effectively argued the opposite theses: the drain of funds by the large institutions takes resources away from independent researchers and isolated institutions; a more 'horizontal' collaboration model such as InterEdition's can prove more effective than the 'vertical' one managed by the large organisations; and, finally, the latter run the risk of building "huge (digital) highways that nobody uses".

Controversy 6: “Digital curation” or “digital preservation” is a topic that originated within the world of digital libraries; recently it has been drawn closer and closer to the Digital Humanities. Using it as an example: what is the proper balance between conceptual work and technology?


Opponents:

The two opponents of this controversy embodied very well the distant approaches that may be implied by the terms "curation" (as a professional librarian's cultural work) and "preservation" (meant, in the most restricted sense, as a practical, technological issue).

Helen Tibbo stated that in the future all curation will be digital, that "digital curation" may be considered a discipline in its own right, and that technology is one of the requirements for digital curation, but only the last in a long list of "conceptual" skills.

Henry Gladney could not have tackled the issue from a more distant angle. He illustrated a digital model for a new generation of files (TDOs, Trustworthy Digital Objects), engineered to resist falsification or any other kind of information loss.

Gladney had stated that "The issue of digital preservation has already been solved from a Computer Sciences perspective". This may be true, but the following discussion showed very easily and from many angles that long-term preservation is not only a technological issue, but eminently a social and cultural one.
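Gladney's TDO design is not spelled out here, but the technical kernel of tamper-evidence he relies on can be sketched in a few lines (a generic illustration, not his actual model): store a cryptographic digest alongside the content, so that any later alteration of the bytes is detectable.

```python
import hashlib

def seal(content: bytes) -> dict:
    """Package content together with a digest of its bytes."""
    return {"content": content,
            "sha256": hashlib.sha256(content).hexdigest()}

def is_intact(obj: dict) -> bool:
    """Recompute the digest and compare it with the stored one."""
    return hashlib.sha256(obj["content"]).hexdigest() == obj["sha256"]

letter = seal(b"Kjaere Bjoernson, ...")
assert is_intact(letter)

# Any tampering with the bytes breaks the match.
letter["content"] = b"Kjaere Bjoernson, [forged]"
assert not is_intact(letter)
```

Note how little of the preservation problem this actually solves: the digest detects corruption, but deciding who stores the objects, who vouches for the digests across decades, and who still understands the formats is exactly the social and cultural question the discussion raised.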

Helen Tibbo (Placing the Horse before the Cart: Conceptual and Technical Dimensions of Digital Curation) predicted that in the future all curators of cultural heritage will be digital curators, and that strictly technological skills will make up only a minimal part of the professional profile required of this new generation of librarians.

Henry Gladney proposed a purely technical model (as he himself stressed several times) for creating a new generation of files designed to preserve their content (text, images, music, etc.) in the long term against falsification or damage (Long-Term Digital Preservation: a Digital Humanities Topic?).

With all due respect to Gladney, however, the discussion that followed had an easy time demonstrating, from several viewpoints, that the long-term preservation of cultural objects is much more a social and cultural question than a technical problem of information encoding.

Controversy 7: "Digital Libraries" have started their life as an answer to opportunities created by a specific stage of technical development. Where are they now, between Computer Science and the Digital Humanities?


Opponents:

Hans-Christoph Hobohm, in the closing talk of the Conference, said that digital libraries today are mostly 'static': they deliver information to a 'user' in the terms of the Shannon-Weaver linear model of communication. Hobohm's question was: "How can we make the information stored in (digital) libraries become knowledge?". His answer lies in the concept of the "blended library": a physical place where users interact with digital objects using their bodies. This model takes into account the fact that cognitive processes happen through our body.

Hans-Christoph Hobohm (Do digital libraries generate knowledge?) closed the conference by proposing that digital libraries become physical spaces in which readers interact with digital objects with their whole body ("blended library").

General considerations


I think it may be said that this Conference was probably better devised by Manfred Thaller than realised by its speakers.

Some central methodological issues (like the relationship between the Humanities and Computer Science, or the idea of text lying behind our digital modelling) were alluded to rather than fully tackled, while much of the discussion dealt with the organisation of the Digital Humanities scholarly community.

This is understandable, as the Digital Humanities are undergoing a phase of "institutionalisation" that makes the disciplinary and organisational issues urgent. However, it may be said that the theoretical complexity of many of Thaller's questions was sometimes sidestepped.

Also, it was very difficult for speakers to organise their talks "dialogically", that is, to fit their arguments into the very sophisticated dialogic structure of the Conference.

Nevertheless, many talks included stimulating reflections. Among others, one could mention the interesting discussions on inclusiveness by Fiormonte, on disciplinarity by Meister, on textual variance and markup by Schmidt, and on 'horizontal' collaboration by van Zundert.

As is often the case, the discussions were often at least as interesting as the talks. In this case, though, that did not happen by accident, but as a result of the "dialogic" organisation of the meeting by Manfred Thaller, who allowed much time for debate and provided participants with a further occasion to discuss the topics by commenting on the papers online.

On the whole, I believe it can be said that the conference was better devised by Manfred Thaller than realised by its speakers.

A number of central questions of the Digital Humanities, though raised by the titles of the controversies, were touched upon rather than fully addressed, while the discussion orbited instead around the 'organisational' aspects of the world of digital humanities scholarship.

This tendency of many current debates may be due to the rapid and tumultuous phase of 'institutionalisation' of the Digital Humanities, which fuels debates on disciplinary self-definition and internal organisation.

On the other hand, it was indeed difficult to address the fundamental questions Thaller put on the table in a truly 'dialogic', that is complex and problematic, way.

What remains are the interesting reflections prompted, to mention only a few, by Fiormonte (on the inclusiveness of our digital modelling), Meister (on disciplinary definitions), Schmidt (on textual variance and markup), and van Zundert (on 'horizontal' collaboration).

As usually happens, the discussions were perhaps more interesting than the talks themselves. This time, however, that was not by chance, but by Manfred Thaller's deliberate choice to organise the whole meeting in dialogic form and to provide for the continuation of that dialogue online, through the comments that can be posted on the talks on the rich conference website (http://www.cceh.uni-koeln.de/events/CologneDialogue).