Publications

2019

*

Transforming scholarship in the archives through handwritten text recognition: Transkribus as a case study

G. Muehlberger; L. Seaward; M. Terras; S. Ares Oliveira; V. Bosch et al.

Journal of Documentation. 2019-09-09.

DOI : 10.1108/JD-07-2018-0114.

Purpose: This paper provides an overview of the current use of handwritten text recognition (HTR) on archival manuscript material, as provided by the EU H2020 funded Transkribus platform. It explains HTR, demonstrates Transkribus, gives examples of use cases, highlights the effect HTR may have on scholarship, and evidences this turning point in the advanced use of digitised heritage content. The paper aims to discuss these issues. Design/methodology/approach: The paper adopts a case study approach, using the development and delivery of the one openly available HTR platform for manuscript material. Findings: Transkribus has demonstrated that HTR is now a usable technology that can be employed in conjunction with mass digitisation to generate accurate transcripts of archival material. Use cases are demonstrated, and a cooperative model is suggested as a way to ensure sustainability and scaling of the platform; however, funding and resourcing issues are identified. Research limitations/implications: The paper presents results from projects; further user studies could be undertaken involving interviews, surveys, etc. Practical implications: Only HTR provided via Transkribus is covered; however, this is the only publicly available platform for HTR on individual collections of historical documents at the time of writing, and it represents the current state of the art in this field. Social implications: The increased access to information contained within historical texts has the potential to be transformational for both institutions and individuals. Originality/value: This is the first published overview of how HTR is used by a wide archival studies community, reporting and showcasing current applications of handwriting technology in the cultural heritage sector.

*

A deep learning approach to Cadastral Computing

S. Ares Oliveira; I. di Lenardo; B. Tourenc; F. Kaplan

2019-07-11. Digital Humanities Conference, Utrecht, Netherlands, July 8-12, 2019.

This article presents a fully automatic pipeline to transform the Napoleonic cadastres into an information system. The cadastres established during the first years of the 19th century cover a large part of Europe. For many cities they provide one of the first geometrical surveys, linking precise parcels with identification numbers. These identification numbers point to registers containing the names of the proprietors. As the Napoleonic cadastres include millions of parcels, they offer a detailed snapshot of a large part of Europe’s population at the beginning of the 19th century. As many kinds of computation can be performed on such a large object, we use the neologism “cadastral computing” to refer to the operations performed on such datasets. This approach is the first fully automatic pipeline to transform the Napoleonic cadastres into an information system.

*

Frederic Kaplan Isabella di Lenardo

F. Kaplan; I. di Lenardo

Apollo-The International Art Magazine. 2019-01-01.

2018

*

Informatica per Umanisti: da Venezia al mondo intero attraverso l’Europa

D. Rodighiero

Conferenza per la Società Dante Alighieri, University of Bern, Switzerland, December 10, 2018.

At a moment when the scientific world is opening up to a wider public, this lecture is intended as an accessible introduction to the digital humanities. Its subject is informatics for humanists, a new field of research that enriches the humanities through the use of new technologies. My personal experience will be the guiding thread of this introduction, and the lecture will be an occasion to present the projects I have contributed to over the last five years. From Paris to Venice, from Lausanne to Boston, doing research means gaining experience all over the world. I will speak of Bruno Latour and his modes of existence, of Frédéric Kaplan and his time machine, of Franco Moretti and his distant reading, and of Marilyne Andersen and her cartography of affinities, all people I have had the pleasure of meeting and who have enriched my academic path. Through a visual narrative made of images and videos, I will explain how the digital humanities can make archives, museums and libraries more interesting places for everyone.

*

Comparing human and machine performances in transcribing 18th century handwritten Venetian script

S. Ares Oliveira; F. Kaplan

2018-07-26. Digital Humanities Conference, Mexico City, Mexico, June 24-29, 2018.

Automatic transcription of handwritten texts has made important progress in recent years. This increase in performance, essentially due to new architectures combining convolutional neural networks with recurrent neural networks, opens new avenues for searching in large databases of archival and library records. This paper reports on our recent progress in making millions of digitized Venetian documents searchable, focusing on a first subset of 18th century fiscal documents from the Venetian State Archives. For this study, about 23’000 image segments containing 55’000 Venetian names of persons and places were manually transcribed by archivists trained to read this kind of handwritten script. This annotated dataset was used to train and test a deep learning architecture with a performance level (about 10% character error rate) that is satisfactory for search use cases. This paper compares this level of reading performance with the reading capabilities of Italian-speaking transcribers. More than 8,500 new human transcriptions were produced, confirming that the amateur transcribers were not as good as the experts. On average, however, the machine outperforms the amateur transcribers in this transcription task.
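
A character error rate of this kind is conventionally computed as the Levenshtein edit distance between the system output and a reference transcription, divided by the reference length. A minimal sketch (function names and sample strings are illustrative, not taken from the project's code):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(hypothesis: str, reference: str) -> float:
    """CER = edit distance / length of the reference transcription."""
    return levenshtein(hypothesis, reference) / max(len(reference), 1)

# Hypothetical machine output vs. archivist reference.
print(character_error_rate("Zuane Contarini", "Zuanne Contarini"))  # ~0.06
```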

*

The Scholar Index: Towards a Collaborative Citation Index for the Arts and Humanities

G. Colavizza; M. Romanello; M. Babetto; V. Barbay; L. Bolli et al.

Mexico City, 26-29 June 2018.

*

Deep Learning for Logic Optimization Algorithms

W. J. Haaswijk; E. Collins; B. Seguin; M. Soeken; F. Kaplan et al.

2018-05-27. 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, May 27-30, 2018. p. 1-4.

DOI : 10.1109/ISCAS.2018.8351885.

The slowing down of Moore's law and the emergence of new technologies put increasing pressure on the field of EDA. There is a constant need to improve optimization algorithms. However, finding and implementing such algorithms is a difficult task, especially with the novel logic primitives and potentially unconventional requirements of emerging technologies. In this paper, we cast logic optimization as a deterministic Markov decision process (MDP). We then take advantage of recent advances in deep reinforcement learning to build a system that learns how to navigate this process. Our design has a number of desirable properties. It is autonomous because it learns automatically and does not require human intervention. It generalizes to large functions after training on small examples. Additionally, it intrinsically supports both single- and multi-output functions, without the need to handle special cases. Finally, it is generic because the same algorithm can be used to achieve different optimization objectives, e.g., size and depth.
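
As a rough illustration of the paper's framing (not the authors' system, which trains a deep reinforcement learning agent on logic networks), logic optimization can be cast as a deterministic MDP: the state is the current circuit, actions are rewrite moves, and the objective rewards reductions in size or depth. A toy sketch with invented rewrite rules and a greedy policy standing in for the learned one:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class State:
    """A logic network, abstracted here as just a gate count and a depth."""
    size: int
    depth: int

# An action deterministically rewrites a state; these rules are invented for illustration.
Action = Callable[[State], State]

def merge_gates(s: State) -> State:    # hypothetical size-reducing rewrite
    return State(max(s.size - 1, 1), s.depth)

def balance_tree(s: State) -> State:   # hypothetical depth-reducing rewrite
    return State(s.size, max(s.depth - 1, 1))

def rollout(state: State, actions: List[Action],
            objective: Callable[[State], int], steps: int = 10) -> State:
    """Greedy navigation of the deterministic MDP: at each step apply the
    action whose successor minimizes the objective. A learned policy would
    replace this greedy choice."""
    for _ in range(steps):
        state = min((a(state) for a in actions), key=objective)
    return state

print(rollout(State(size=40, depth=12), [merge_gates, balance_tree],
              objective=lambda s: s.size + s.depth))
```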

*

Mapping Affinities in Academic Organizations

D. Rodighiero; F. Kaplan; B. Beaude

Frontiers in Research Metrics and Analytics. 2018-02-19.

DOI : 10.3389/frma.2018.00004.

Scholarly affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not leave visible traces in information systems; for instance, some peers may share interests without actually knowing it. This article illustrates the development of a map of affinities for academic collectives, designed to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, hereinafter ENAC. The school consists of around 1,000 scholars, 70 laboratories, and 3 institutes. The actual affinities are modeled using the data available from the information systems reporting publications, teaching, and advising scholars, whereas the potential affinities are addressed through text mining of the publications. The major challenge for designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information; they also apply at different scales. The map, thus, shows local affinities inside a given laboratory, as well as global affinities among laboratories. This article presents a graphical grammar to represent affinities. Its effectiveness is illustrated by two actualizations of the design proposal: an interactive online system in which the map can be parameterized, and a large-scale carpet of 250 square meters. In both cases, we discuss how the materiality influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and their consequences in terms of governance, the understanding of the scholars’ own positioning in the academic group in order to foster opportunities for new collaborations and, eventually, the interpretation of the structure from a general public to evaluate the relevance of the tool for external communication.

*

Negentropic linguistic evolution: A comparison of seven languages

V. Buntinx; F. Kaplan

2018. Digital Humanities 2018, Mexico City, Mexico, June 26-29, 2018.

The relationship between the entropy of language and its complexity has been the subject of much speculation, some seeing the increase of linguistic entropy as a sign of linguistic complexification, others interpreting entropy drop as a marker of greater regularity. Some evolutionary explanations, like the learning bottleneck hypothesis, argue that communication systems with more regular structures tend to have evolutionary advantages over more complex structures. Other structural effects of communication networks, like the globalization of exchanges or algorithmic mediation, have been hypothesized to have a regularization effect on language. Longer-term studies are now possible thanks to the arrival of large-scale diachronic corpora, like newspaper archives or digitized libraries. However, simple analyses of such datasets are prone to misinterpretation due to significant variations of corpus size over the years and the indirect effect this can have on various measures of language change and linguistic complexity. In particular, it is important not to misinterpret the arrival of new words as an increase in complexity, as this variation is intrinsic, as is the variation of corpus size. This paper is an attempt to conduct an unbiased diachronic study of linguistic complexity over seven different languages using the Google Books corpus. The paper uses a simple entropy measure on a closed, but nevertheless large, subset of words, called the kernel. The kernel contains only the words that are present without interruption for the whole length of the study, which excludes all the words that appeared or disappeared during the period. We argue that this method is robust towards variations of corpus size and permits studying changes in complexity despite possible (and, in the case of Google Books, unknown) changes in the composition of the corpus. Indeed, the evolution observed in the seven languages shows rather different patterns that are not directly correlated with the evolution of the size of the respective corpora. The rest of the paper presents the methods followed, the results obtained, and the next steps we envision.
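
A minimal sketch of the kernel idea, assuming the kernel is simply the set of words attested in every year of the period, and that complexity is measured as the Shannon entropy of the unigram distribution restricted to that kernel (the toy corpus and tokenization are placeholders):

```python
import math
from collections import Counter

def kernel(yearly_tokens: dict[int, list[str]]) -> set[str]:
    """Words present without interruption over the whole period."""
    years = iter(yearly_tokens.values())
    k = set(next(years))
    for tokens in years:
        k &= set(tokens)
    return k

def kernel_entropy(tokens: list[str], kernel_words: set[str]) -> float:
    """Shannon entropy (bits) of the unigram distribution restricted to the kernel."""
    counts = Counter(t for t in tokens if t in kernel_words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

corpus = {1800: "the king said the law".split(),
          1900: "the law of the state".split()}
k = kernel(corpus)  # {'the', 'law'}: words never interrupted
for year, toks in corpus.items():
    print(year, round(kernel_entropy(toks, k), 3))
```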

*

dhSegment: A generic deep-learning approach for document segmentation

S. A. Oliveira; B. Seguin; F. Kaplan

2018-01-01. 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, Aug 05-08, 2018. p. 7-12.

DOI : 10.1109/ICFHR-2018.2018.00011.

In recent years there have been multiple successful attempts at tackling document processing problems separately, by designing task-specific hand-tuned strategies. We argue that the diversity of historical document processing tasks makes solving them one at a time prohibitive and shows the need for generic approaches that can handle the variability of historical series. In this paper, we address multiple tasks simultaneously, such as page extraction, baseline extraction, layout analysis, and the extraction of multiple typologies of illustrations and photographs. We propose an open-source implementation of a CNN-based pixel-wise predictor coupled with task-dependent post-processing blocks. We show that a single CNN architecture can be used across tasks with competitive results. Moreover, most of the task-specific post-processing steps can be decomposed into a small number of simple and standard reusable operations, adding to the flexibility of our approach.
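
The post-processing side of such a pipeline can be sketched generically: any fully convolutional network yields a per-pixel probability map, which is then binarized and vectorized into regions. A minimal sketch of that second stage (thresholds and the toy probability map are illustrative; this is not the dhSegment code itself):

```python
import numpy as np
from scipy import ndimage

def extract_regions(prob_map: np.ndarray, threshold: float = 0.5, min_area: int = 100):
    """Generic post-processing of a pixel-wise predictor: binarize the
    per-pixel probability map, then vectorize connected components into
    bounding boxes, discarding small noise blobs."""
    binary = prob_map > threshold
    labels, _ = ndimage.label(binary)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, w, h)
    return boxes

# Toy probability map with two "illustration" blobs.
pm = np.zeros((200, 200))
pm[10:60, 10:80] = 0.9
pm[120:190, 100:180] = 0.8
print(extract_regions(pm))
```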

*

Making large art historical photo archives searchable

B. L. A. Seguin / F. Kaplan; I. di Lenardo (Dir.)

Lausanne, EPFL, 2018.

DOI : 10.5075/epfl-thesis-8857.

In recent years, museums, archives and other cultural institutions have initiated important programs to digitize their collections. Millions of artefacts (paintings, engravings, drawings, ancient photographs) are now represented in digital photographic format. Furthermore, through progress in standardization, a growing portion of these images are now available online in an easily accessible manner. This thesis studies how such large-scale art history collections can be made searchable using new deep learning approaches for processing and comparing images. It takes as a case study the processing of the photo archive of the Foundation Giorgio Cini, where more than 300'000 images have been digitized. We demonstrate how a generic processing pipeline can reliably extract the visual and textual content of scanned images, opening up ways to efficiently digitize large photo collections. Then, by leveraging an annotated graph of visual connections, a metric is learnt that allows clustering and searching through artwork reproductions independently of their medium, effectively solving the difficult problem of cross-domain image search. Finally, the thesis studies how a web interface allows users to perform different searches based on this metric. We also evaluate the process by which users can annotate elements of interest during their navigation, to be added to the database, allowing the system to be trained further and give better results. By documenting a complete approach on how to go from a physical photo archive to a state-of-the-art navigation system, this thesis paves the way for a global search engine across the world's photo archives.

*

The Intellectual Organisation of History

G. Colavizza / F. Kaplan; M. Franceschet (Dir.)

Lausanne, EPFL, 2018.

DOI : 10.5075/epfl-thesis-8537.

A tradition of scholarship discusses the characteristics of different areas of knowledge, in particular since modern academia compartmentalized them into disciplines. The academic approach is often put into question: are there two or more cultures? Is ever-increasing specialization the only way to cope with information abundance, or are holistic approaches helpful too? What is happening with the digital turn? While these questions are well studied for the sciences, our understanding of how the humanities might differ in their own respect is far less advanced. In particular, modern academia might foster specific patterns of specialization in the humanities. Eventually, the recent rise in the application of digital methods to research, known as the digital humanities, might be introducing structural adaptations through the development of shared research technologies and the advent of organizational practices such as the laboratory. It therefore seems timely and urgent to map the intellectual organization of the humanities. This investigation depends on a few traits such as the level of codification, the degree of agreement among scholars, and the level of coordination of their efforts. These characteristics can be studied by measuring their influence on the outcomes of scientific communication. In particular, this thesis focuses on history as a discipline, using bibliometric methods. In order to explore history in its complexity, an approach to creating collaborative citation indexes in the humanities is proposed, resulting in a new dataset comprising monographs, journal articles and citations to primary sources. Historians’ publications were found to organize thematically and chronologically, sharing a limited set of core sources across small communities. Core sources act in two ways with respect to the intellectual organization: locally, by adding connectivity within communities, or globally, as weak ties across communities. Over recent decades, fragmentation has been on the rise in the intellectual networks of historians, and a comparison across a variety of specialisms from the human, natural and mathematical sciences revealed the fragility of such networks across the axes of citation and textual similarities. Humanists organize into more, smaller and more scattered topical communities than scientists. A characterisation of history is eventually proposed. Historians produce new historiographical knowledge with a focus on evidence or interpretation. The former aims at providing the community with an agreed-upon factual resource; interpretive work is instead mainly focused on creating novel perspectives. A second axis refers to two modes of exploration of new ideas: in-breadth, where novelty comes from adding new, previously unknown pieces to the mosaic, or in-depth, where novelty comes from improving on previous results. While all combinations are possible, historians tend to focus on in-breadth interpretations, with the immediate consequence that growth accentuates intellectual fragmentation in the absence of further consolidating factors such as theory or technologies. Research on evidence might have a different impact by potentially scaling up in the digital space, and in so doing influence the modes of interpretation in turn. This process is not dissimilar to the gradual rise in importance of research technologies and collaborative competition in the mathematical and natural sciences. This is perhaps the promise of the digital humanities.

*

Mapping affinities: visualizing academic practice through collaboration

D. Rodighiero / F. Kaplan; B. Beaude (Dir.)

EPFL, 2018.

DOI : 10.5075/epfl-thesis-8242.

Academic affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not leave visible traces in information systems; for instance, some peers may share scientific interests without actually knowing it. This thesis illustrates the development of a map of affinities for scientific collectives, intended to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, which consists of three institutes, seventy laboratories, and around one thousand employees. The actual affinities are modeled using the data available from the academic systems reporting publications, teaching, and advising, whereas the potential affinities are addressed through text mining of the documents registered in the information system. The major challenge in designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information; they also apply at different scales. Therefore, the map shows local affinities inside a given laboratory, as well as global affinities among laboratories. The thesis presents a graphical grammar to represent affinities. This graphical system is actualized in several embodiments, among which a large-scale carpet of 250 square meters and an interactive online system in which the map can be parameterized. In both cases, we discuss how the actualization influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and the decisions that follow, the understanding of the researchers’ own positioning in the academic collective that might reveal opportunities for new synergies, and eventually the interpretation of the structure from an external standpoint, suggesting the relevance of the tool for communication.

2017

*

Layout Analysis on Newspaper Archives

V. Buntinx; F. Kaplan; A. Xanthos

2017. Digital Humanities 2017, Montreal, Canada, August 8-11, 2017.

The study of newspaper layout evolution through historical corpora has been addressed by diverse qualitative and quantitative methods in the past few years. The recent availability of large corpora of newspapers is now making the quantitative analysis of layout evolution ever more popular. This research investigates a method for the automatic detection of layout evolution on scanned images with a factorial analysis approach. The notion of eigenpages is defined by analogy with eigenfaces used in face recognition processes. The corpus of scanned newspapers that was used contains 4 million press articles, covering about 200 years of archives. This method can automatically detect layout changes of a given newspaper over time, rebuilding a part of its past publishing strategy and retracing major changes in its history in terms of layout. Besides these advantages, it also makes it possible to compare several newspapers at the same time and therefore to compare the layout changes of multiple newspapers based only on scans of their issues.
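
By analogy with eigenfaces, eigenpages can be obtained from a principal component analysis of downscaled page scans treated as pixel vectors; layout changes then appear as shifts of the issues' coordinates along the leading components. A minimal sketch, with random data standing in for the scans:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: each row is a flattened, downscaled grayscale scan of a front page.
pages = np.random.rand(500, 64 * 48)   # 500 issues, 64x48 thumbnails
years = np.linspace(1800, 2000, 500)   # placeholder publication dates

pca = PCA(n_components=10)
coords = pca.fit_transform(pages)      # issue coordinates in "eigenpage" space

# A layout change shows up as a jump of the period mean along the first components.
for start in range(1800, 2000, 50):
    mask = (years >= start) & (years < start + 50)
    print(start, np.round(coords[mask, :2].mean(axis=0), 2))
```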

*

Machine Vision Algorithms on Cadaster Plans

S. Ares Oliveira; I. di Lenardo; F. Kaplan

2017. Premiere Annual Conference of the International Alliance of Digital Humanities Organizations (DH 2017), Montreal, Canada, August 8-11, 2017.

Cadaster plans are cornerstones for reconstructing dense representations of the history of a city. They provide information about the urban shape of the city, making it possible to reconstruct the footprints of the most important urban components as well as information about the urban population and city functions. However, as some of these handwritten documents are more than 200 years old, establishing a pipeline for interpreting them remains extremely challenging. We present the first implementation of a fully automated process capable of segmenting and interpreting Napoleonic cadaster maps of the Veneto region dating from the beginning of the 19th century. Our system extracts the geometry of each drawn parcel, and classifies, reads and interprets the handwritten labels.

*

Analyse multi-échelle de n-grammes sur 200 années d'archives de presse

V. Buntinx / F. Kaplan; A. Xanthos (Dir.)

Lausanne, EPFL, 2017.

DOI : 10.5075/epfl-thesis-8180.

The recent availability of large corpora of digitized texts over several centuries opens the way to new forms of studies on the evolution of languages. In this thesis, we study a corpus of 4 million press articles covering a period of 200 years. The thesis tries to measure the evolution of written French over this period at the level of words and expressions, but also in a more global way, by attempting to define integrated measures of linguistic evolution. The methodological choice is to introduce a minimum of linguistic hypotheses by developing new measures around the simple notion of the n-gram, a sequence of n consecutive words. From this starting point, the thesis explores the potential of already known concepts such as temporal frequency profiles and their diachronic correlations, but also introduces new abstractions such as the notion of a resilient linguistic kernel or the decomposition of profiles into solidified expressions according to simple statistical models. Through the use of distributed computational techniques, it develops methods to test the relevance of these concepts on a large amount of textual data, and thus makes it possible to propose a virtual observatory of the diachronic evolutions associated with a given corpus. On this basis, the thesis explores more precisely the multi-scale dimension of linguistic phenomena by considering how standardized measures evolve when applied to increasingly long n-grams. The discrete and continuous scale from isolated entities (n=1) to increasingly complex and structured expressions (1 < n < 10) offers an axis of study transversal to the classical differentiations that ordinarily structure linguistics: syntax, semantics, pragmatics, and so on. The thesis explores the quantitative and qualitative diversity of phenomena at these different scales of language and develops a novel approach by proposing multi-scale measurements and formalizations, with the aim of characterizing more fundamental structural aspects of the studied phenomena.

*

A Simple Set of Rules for Characters and Place Recognition in French Novels

C. Bornet; F. Kaplan

Frontiers in Digital Humanities. 2017.

DOI : 10.3389/fdigh.2017.00006.

*

Big Data of the Past

F. Kaplan; I. di Lenardo

Frontiers in Digital Humanities. 2017.

DOI : 10.3389/fdigh.2017.00012.

Big Data is not a new phenomenon. History is punctuated by regimes of data acceleration, characterized by feelings of information overload accompanied by periods of social transformation and the invention of new technologies. During these moments, private organizations, administrative powers, and sometimes isolated individuals have produced important datasets, organized following a logic that is often subsequently superseded but was at the time, nevertheless, coherent. To be translated into relevant sources of information about our past, these document series need to be redocumented using contemporary paradigms. The intellectual, methodological, and technological challenges linked to this translation process are the central subject of this article.

*

Narrative Recomposition in the Context of Digital Reading

C. A. M. Bornet / F. Kaplan (Dir.)

Lausanne, EPFL, 2017.

DOI : 10.5075/epfl-thesis-7592.

In any creative process, the tools one uses have an immediate influence on the shape of the final artwork. However, while the digital revolution has redefined core values in most creative domains over the last few decades, its impact on literature remains limited. This thesis explores the relevance of digital tools for several aspects of novel writing by focusing on two research questions: Is it possible for an author to edit better novels out of already published ones, given access to suitable tools? And will authors change their way of writing when they know how they are being read? This thesis is a multidisciplinary participatory study, actively involving the Swiss novelist Daniel de Roulet, to construct measures, visualizations, and digital tools aimed at supporting the dynamic reordering of narrative material, similar to how one edits video footage. We developed and tested various text analysis and visualization tools, the results of which were interpreted and used by the author to recompose a family saga out of material he had been writing for twenty-four years. Based on this research, we released Saga+, an online editing, publishing, and reading tool. The platform was handed over to third parties to improve existing writings, making new novels available to the public as a result. While many researchers have studied the structure of texts either through global statistical features or micro-syntactic analyses, we demonstrate that by allowing visualization and interaction at an intermediate level of organisation, authors can manipulate their own texts in agile ways. By integrating readers’ traces into this newly revealed structure, authors can begin to approach the question of optimizing their writing processes in ways similar to what is practiced in other media industries. The introduction of tools for optimal composition opens new avenues for authors, as well as a controversial debate regarding the future of literature.

*

Optimized scripting in Massive Open Online Courses

F. Kaplan; I. di Lenardo

Dariah Teach, Université de Lausanne, Switzerland, March 23-24, 2017.

The Time Machine MOOC, currently under preparation, is designed to provide the necessary knowledge for students to use the editing tools of the Time Machine platform. The first test case of the platform is centered on our current work on the City of Venice and its archives. Small teaching modules focus on specific skills of increasing difficulty: segmenting a word on a page, transcribing a word from a document series, georeferencing ancient maps using homologous points, disambiguating named entities, redrawing urban structures, finding matching details between paintings, and writing scripts that perform some of these tasks automatically. Other skills include actions in the physical world, like scanning pages, books and maps, or performing a photogrammetric reconstruction of a sculpture by taking a large number of pictures. Eventually, some other modules are dedicated to the general historical, linguistic, technical or archival knowledge that constitutes a prerequisite for mastering specific tasks. A general dependency graph has been designed, specifying in which order the skills can be acquired. The performance of most tasks can be tested using predefined exercises and evaluation metrics, which allows for a precise evaluation of each student's level of mastery. When students successfully pass the test related to a skill, they get the credentials to use that specific tool in the platform and start contributing. However, the teaching options can vary greatly for each skill. Building upon the script concept developed by Dillenbourg and colleagues, we designed each tutorial as a parameterized sequence. A simple gradient descent method is used to progressively optimize the parameters in order to maximize the success rate of the students at the skill tests, and thereby seek a form of optimality among the various design choices for the teaching methods. Thus, the more students use the platform, the more efficient the teaching scripts become.
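
The optimization loop described at the end can be sketched as finite-difference gradient ascent on the observed success rate (the success-rate function below is a synthetic stand-in for real student data, and the parameter names are hypothetical):

```python
import numpy as np

def observed_success_rate(params: np.ndarray) -> float:
    """Stand-in for the empirical success rate of students who followed a
    tutorial configured with `params` (e.g. pacing, number of examples)."""
    optimum = np.array([0.3, 0.7])  # invented "best" configuration
    return float(np.exp(-np.sum((params - optimum) ** 2)))

def optimize_script(params: np.ndarray, lr: float = 0.1,
                    eps: float = 1e-3, steps: int = 200) -> np.ndarray:
    """Finite-difference gradient ascent: nudge each script parameter in the
    direction that empirically improves the students' test success rate."""
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bump = np.zeros_like(params)
            bump[i] = eps
            grad[i] = (observed_success_rate(params + bump) -
                       observed_success_rate(params - bump)) / (2 * eps)
        params = params + lr * grad
    return params

print(np.round(optimize_script(np.array([0.9, 0.1])), 2))  # converges near [0.3, 0.7]
```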

*

The references of references: a method to enrich humanities library catalogs with citation data

G. Colavizza; M. Romanello; F. Kaplan

International Journal on Digital Libraries. 2017.

DOI : 10.1007/s00799-017-0210-1.

The advent of large-scale citation indexes has greatly impacted the retrieval of scientific information in several domains of research. The humanities have largely remained outside of this shift, despite their increasing reliance on digital means for information seeking. Given that publications in the humanities have a longer than average life-span, mainly due to the importance of monographs for the field, this article proposes to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. Reference monographs are works considered to be of particular importance in a research library setting, and likely to possess characteristic citation patterns. The article shows how to select a corpus of reference monographs, and proposes a pipeline to extract the network of publications they refer to. Results using a set of reference monographs in the domain of the history of Venice show that only 7% of extracted citations are made to publications already within the initial seed. Furthermore, the resulting citation network suggests the presence of a core set of works in the domain, cited more frequently than average.

*

Studying Linguistic Changes over 200 Years of Newspapers through Resilient Words Analysis

V. Buntinx; C. Bornet; F. Kaplan

Frontiers in Digital Humanities. 2017.

DOI : 10.3389/fdigh.2017.00002.

This paper presents a methodology for analyzing linguistic changes in a given textual corpus that overcomes two common problems in corpus linguistics studies. One of these issues is the monotonic increase of corpus size over time, and the other is the presence of noise in the textual data. In addition, our method allows us to better target the linguistic evolution of the corpus, rather than other aspects such as noise fluctuation or topic evolution. A corpus formed by two newspapers, “La Gazette de Lausanne” and “Le Journal de Genève”, is used, providing 4 million articles from 200 years of archives. We first perform some classical measurements on this corpus in order to provide indicators and visualizations of linguistic evolution. We then define the concepts of lexical kernel and word resilience to face the two challenges of noise and corpus size fluctuations. The paper ends with a discussion comparing the results of the linguistic change analysis and concludes with possible future work in that direction.

2016

*

From Documents to Structured Data: First Milestones of the Garzoni Project

M. Ehrmann; G. Colavizza; O. Topalov; R. Cella; D. Drago et al.

DHCommons. 2016.

Led by an interdisciplinary consortium, the Garzoni project undertakes the study of apprenticeship, work and society in early modern Venice by focusing on a specific archival source, namely the Accordi dei Garzoni from the Venetian State Archives. The project revolves around two main phases with, in the first instance, the design and the development of tools to extract and render information contained in the documents (according to Semantic Web standards) and, as a second step, the examination of such information. This paper outlines the main progress and achievements during the first year of the project.

*

Ancient administrative handwritten documents: virtual x-ray reading

F. Albertin; G. Margaritondo; F. Kaplan

2016.

Patent number(s): WO2015189817

A method for detecting ink writings in a specimen comprising stacked pages, allowing a page-by-page reading without turning pages. The method comprises the steps of: taking a set of projection x-ray images for different positions of the specimen with respect to an x-ray source and a detector from an apparatus for taking projection x-ray images; storing the set of projection x-ray images in a suitable computer system; and processing the set of projection x-ray images to tomographically reconstruct the shape of the specimen.
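
The reconstruction step can be illustrated with a standard filtered back-projection on a synthetic cross-section, here using scikit-image (the patent's actual processing chain is not public in code form):

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic cross-section of a stacked specimen: two bright "ink" strokes.
page = np.zeros((128, 128))
page[40:44, 20:100] = 1.0
page[80:84, 30:90] = 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(page, theta=theta)             # the set of projection images
reconstruction = iradon(sinogram, theta=theta)  # tomographic reconstruction

print(f"mean reconstruction error: {np.abs(reconstruction - page).mean():.3f}")
```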

*

Rendre le passé présent

F. Kaplan

Forum des 100, Université de Lausanne, Switzerland, May 2016.

The design of a four-dimensional space, agile navigation within which makes it possible to reintroduce a fluid continuity between present and past, belongs to the old philosophical and technological dream of the time machine. The historical moment to which we are invited is the continuation of a long process in which fiction, technology, science and culture intertwine. The time machine is that horizon, ever discussed, progressively approached and, today perhaps for the first time, within reach.

*

La modélisation du temps dans les Digital Humanities

F. Kaplan

Regimes temporels et sciences historiques, Bern, October 14, 2016.

Digital interfaces are optimized every day to offer frictionless navigation through the multiple dimensions of the present. It is this fluidity, characteristic of this new relationship to the documentary record, that the digital humanities could manage to reintroduce into the exploration of the past. A simple button should allow us to slide from a representation of the present to a representation of the same referent 10, 100 or 1,000 years ago. Ideally, interfaces for navigating in time should offer the same agility of action as those that allow us to zoom in and out on objects as large and dense as the terrestrial globe. Textual search, the new gateway to knowledge since the 21st century, should extend with the same simplicity to the contents of the documents of the past. Visual search, the second great moment of the indexing of the world, whose first results are beginning to enter our everyday digital practices, could be the keystone of access to the billions of documents that we must now make accessible in digital form. To make the past present, it would have to be restructured according to the logic of the structures of digital society. What would time become in this transformation? Simply a new dimension of space? The answer is perhaps more subtle.

*

L’Europe doit construire la première Time Machine

F. Kaplan

2016.

The Time Machine project, competing in the race for the new FET Flagships, proposes a unique archiving and computing infrastructure to structure, analyse and model data from the past, realign it with the present and make it possible to project into the future. It is supported by 70 institutions from 20 countries and by 13 international programmes.

*

Visual Link Retrieval in a Database of Paintings

B. L. A. Seguin; C. Striolo; I. di Lenardo; F. Kaplan

2016. VISART Workshop, ECCV, Amsterdam, September 2016.

DOI : 10.1007/978-3-319-46604-0_52.

This paper examines how far state-of-the-art machine vision algorithms can be used to retrieve common visual patterns shared by series of paintings. The search for such visual patterns, central to art history research, is challenging because of the diversity of similarity criteria that could relevantly demonstrate genealogical links. We design a methodology and a tool to efficiently annotate clusters of similar paintings and test various algorithms in a retrieval task. We show that a pretrained convolutional neural network can perform better for this task than other machine vision methods aimed at photograph analysis. We also show that retrieval performance can be significantly improved by fine-tuning a network specifically for this task.
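
The retrieval setup under discussion can be sketched with descriptors from a generic pretrained CNN compared by cosine similarity; torchvision's ResNet stands in here for the paper's fine-tuned network, and the image paths are placeholders:

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.resnet18(pretrained=True)  # generic ImageNet weights
model.fc = torch.nn.Identity()  # keep the 512-d pooled features as descriptor
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def descriptor(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(model(x), dim=1).squeeze(0)

# Rank a small collection of painting reproductions against a query image.
paths = ["query.jpg", "painting_a.jpg", "painting_b.jpg"]  # placeholder files
q, *db = [descriptor(p) for p in paths]
scores = [float(q @ d) for d in db]  # cosine similarity (descriptors are L2-normalized)
print(sorted(zip(scores, paths[1:]), reverse=True))
```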

*

Diachronic Evaluation of NER Systems on Old Newspapers

M. Ehrmann; G. Colavizza; Y. Rochat; F. Kaplan

2016. 13th Conference on Natural Language Processing (KONVENS 2016), Bochum, Germany, September 19-21, 2016. p. 97-107.

In recent years, many cultural institutions have engaged in large-scale newspaper digitization projects, and large amounts of historical texts are being acquired (via transcription or OCRization). Beyond document preservation, the next step consists in providing enhanced access to the content of these digital resources. In this regard, the processing of units which act as referential anchors, namely named entities (NE), is of particular importance. Yet the application of standard NE tools to historical texts faces several challenges, and performance is often not as good as on contemporary documents. This paper investigates the performance of different NE recognition tools applied to old newspapers by conducting a diachronic evaluation over 7 time series taken from the archives of the Swiss newspaper Le Temps.

*

Wikipedia's Miracle

F. Kaplan; N. Nova

Lausanne: EPFL PRESS.

Wikipedia has become the principal gateway to knowledge on the web. The doubts about information quality and the rigor of its collective negotiation process during its first couple of years have proved unfounded. Whether this delights or horrifies us, Wikipedia has become part of our lives. Flexible in both its form and content, the online encyclopedia will continue to constitute one of the pillars of digital culture for decades to come. It is time to go beyond prejudices, to study its true nature, and to better understand the emergence of this “miracle.”

*

Le miracle Wikipédia

F. Kaplan; N. Nova

Lausanne: Presses Polytechniques et Universitaires Romandes.

Wikipedia has established itself as the main gateway to knowledge on the web. The debates of its early years concerning the quality of the information produced or the soundness of its collective negotiation process are now behind us. Whether we rejoice in it or deplore it, Wikipedia is now part of our lives. Flexible in both its form and its content, the online encyclopedia will no doubt remain one of the pillars of digital culture for decades to come. Beyond prejudices, it is now time to study its true nature and to understand, in retrospect, how such a “miracle” could have occurred.

*

La culture internet des mèmes

F. Kaplan; N. Nova

Lausanne: Presses Polytechniques et Universitaires Romandes.

We are at a moment of transition in the history of media. On the Internet, millions of people produce, alter and relay “memes”, digital content with stereotyped motifs. This “culture” offers a new, rich and complex landscape to study. For the first time, a phenomenon that is at once global and local, popular and, in a certain way, elitist, constructed, mediated and structured by technology, can be observed with precision. Studying memes means not only understanding what digital culture is and may become, but also inventing a new approach for grasping the complexity of the worldwide circulation of motifs.

*

Visual Patterns Discovery in Large Databases of Paintings

I. di Lenardo; B. L. A. Seguin; F. Kaplan

2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.

The digitization of large databases of photographs of works of art opens new avenues for research in art history. For instance, collecting and analyzing painting representations beyond the relatively small number of commonly accessible works was previously extremely challenging. In the coming years, researchers are likely to have easier access not only to representations of paintings from museum archives but also from private collections, fine arts auction houses and art historians. However, access to large online databases is in itself not sufficient. There is a need for efficient search engines, capable of searching painting representations not only on the basis of textual metadata but also directly through visual queries. In this paper we explore how convolutional neural network descriptors can be used in combination with algebraic queries to express powerful search queries in the context of art history research.
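
The algebraic-query idea can be sketched as vector arithmetic over L2-normalized descriptors: positive and negative examples are combined into a single query vector before nearest-neighbour search. A toy version over a placeholder descriptor matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(10_000, 512))             # placeholder CNN descriptors
D /= np.linalg.norm(D, axis=1, keepdims=True)  # L2-normalize each row

def algebraic_query(positives, negatives, top_k=5):
    """Search for images similar to the positives but unlike the negatives,
    e.g. 'compositions like painting A without the motif of painting B'."""
    q = D[positives].sum(axis=0) - D[negatives].sum(axis=0)
    q /= np.linalg.norm(q)
    scores = D @ q                             # cosine similarities
    return np.argsort(-scores)[:top_k]

print(algebraic_query(positives=[3, 17], negatives=[42]))
```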

*

Visualizing Complex Organizations with Data

D. Rodighiero

IC Research Day, Lausanne, Switzerland, June 30, 2016.

The Affinity Map is a project funded by ENAC whose aim is to provide an instrument for understanding organizations. The photograph shows the unveiling of the first map at the ENAC Research Day. The visualization was presented to the scholars who are themselves displayed in the representation.

*

Navigating through 200 years of historical newspapers

Y. Rochat; M. Ehrmann; V. Buntinx; C. Bornet; F. Kaplan

2016. iPRES 2016, Bern, October 3-6, 2016.

This paper aims to describe and explain the processes behind the creation of a digital library composed of two Swiss newspapers, namely Gazette de Lausanne (1798-1998) and Journal de Genève (1826-1998), covering an almost two-century period. We developed a general purpose application giving access to this cultural heritage asset; a large variety of users (e.g. historians, journalists, linguists and the general public) can search through the content of around 4 million articles via an innovative interface. Moreover, users are offered different strategies to navigate through the collection: lexical and temporal lookup, n-gram viewer and named entities.

*

Studying Linguistic Changes on 200 Years of Newspapers

V. Buntinx; C. Bornet; F. Kaplan

2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.

Large databases of scanned newspapers open new avenues for studying linguistic evolution. By studying a two-billion-word corpus corresponding to 200 years of newspapers, we compare several methods in order to assess how fast language is changing. After critically evaluating an initial set of methods for assessing textual distance between subsets corresponding to consecutive years, we introduce the notion of a lexical kernel, the set of unique words that maintain themselves over long periods of time. Focusing on linguistic stability instead of linguistic change allows building more robust measures to assess long term phenomena such as word resilience. By systematically comparing the results obtained on two subsets of the corpus corresponding to two independent newspapers, we argue that the results obtained are independent of the specificity of the chosen corpus, and are likely to be the results of more general linguistic phenomena.

*

The References of References: Enriching Library Catalogs via Domain-Specific Reference Mining

G. Colavizza; M. Romanello; F. Kaplan

2016. 3rd International Workshop on Bibliometric-enhanced Information Retrieval (BIR2016), Padua, Italy, March 20-23, 2016. p. 32-43.

The advent of large-scale citation services has greatly impacted the retrieval of scientific information for several domains of research. The Humanities have largely remained outside of this shift despite their increasing reliance on digital means for information seeking. Given that publications in the Humanities probably have a longer than average life-span, mainly due to the importance of monographs in the field, we propose to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. We exemplify our approach using a corpus of reference monographs on the history of Venice, extracting the network of publications they refer to. Preliminary results show that on average only 7% of extracted references are made to publications already within such a corpus, suggesting that reference monographs are effective hubs for the retrieval of further resources within the domain.

2015

*

S'affranchir des automatismes

B. Stiegler; F. Kaplan; D. Podalydès

Fabuleuses mutations, Cité des Sciences, December 8, 2015.

*

The Venice Time Machine

F. Kaplan

2015. ACM Symposium on Document Engineering, Lausanne, Switzerland, September 8-11, 2015.

The Venice Time Machine is an international scientific programme launched by EPFL and the University Ca’ Foscari of Venice with the generous support of the Fondation Lombard Odier. It aims at building a multidimensional model of Venice and its evolution covering a period of more than 1000 years. The project’s ambition is to reconstruct a large open-access database that can be used for research and education. Thanks to a partnership with the Archivio di Stato in Venice, kilometers of archives are currently being digitized, transcribed and indexed, laying the base of the largest database ever created on Venetian documents. The State Archives of Venice contain a massive amount of handwritten documentation in languages evolving from medieval times to the 20th century. An estimated 80 km of shelves are filled with over a thousand years of administrative documents, from birth registrations, death certificates and tax statements all the way to maps and urban planning designs. These documents are often very delicate and occasionally in a fragile state of conservation. In complement to these primary sources, the contents of thousands of monographs have been indexed and made searchable.

*

Venice Time Machine : Recreating the density of the past

I. di Lenardo; F. Kaplan

2015. Digital Humanities 2015, Sydney, June 29 - July 3, 2015.

This article discusses the methodology used in the Venice Time Machine project (http://vtm.epfl.ch) to reconstruct a historical geographical information system covering the social and urban evolution of Venice over a period of 1,000 years. Given the time span considered, the project used a combination of sources and a specific approach to align heterogeneous historical evidence into a single geographic database. The project is based on a mass digitization project of one of the largest archives in Venice, the Archivio di Stato. One goal of the project is to build a kind of ‘Google map’ of the past, presenting a hypothetical reconstruction of Venice in 2D and 3D for any year starting from the origins of the city to present-day Venice.

*

On Mining Citations to Primary and Secondary Sources in Historiography

G. Colavizza; F. Kaplan

2015. Clic-IT 2015, Trento, Italy, December 3-4, 2015.

We present preliminary results from the Linked Books project, which aims at analysing citations from the historiography on Venice. A preliminary goal is to extract and parse citations from any location in the text, especially footnotes, both to primary and secondary sources. We detail a pipeline for these tasks based on a set of classifiers, and test it on the Archivio Veneto, a journal in the domain.

*

Text Line Detection and Transcription Alignment: A Case Study on the Statuti del Doge Tiepolo

F. Slimane; A. Mazzei; L. Tomasin; F. Kaplan

2015. Digital Humanities 2015, Sydney, Australia, June 29 - July 3, 2015.

In this paper, we propose a fully automatic system for the transcription alignment of historical documents. We introduce the ‘Statuti del Doge Tiepolo’ data, which include images as well as transcriptions of a 14th-century text written in Gothic script. Our transcription alignment system is based on a forced alignment technique and character hidden Markov models, and is able to efficiently align complete document pages.

*

Anatomy of a Drop-Off Reading Curve

C. Bornet; F. Kaplan

2015. DH2015, Sydney, Australia, June 29 - July 3, 2015.

Not all readers finish the books they start. Electronic media allow us to measure more precisely how this “drop-off” effect unfolds as readers make their way through a book. A curve showing how many people have read each chapter of a book is likely to decline progressively as some readers interrupt their reading “journey”. This article is an initial study of the shape of these “drop-off” reading curves.
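
A drop-off curve of this kind can be computed directly from per-reader progress records: the fraction of readers still reading at each chapter is non-increasing by construction. A minimal sketch with invented data:

```python
# Each value is the last chapter a reader reached (book has 12 chapters).
last_chapter_reached = [12, 3, 12, 7, 1, 12, 9, 2, 12, 5, 12, 4]
n_chapters, n_readers = 12, len(last_chapter_reached)

# Fraction of readers still reading at each chapter.
drop_off_curve = [sum(1 for last in last_chapter_reached if last >= ch) / n_readers
                  for ch in range(1, n_chapters + 1)]
print([round(v, 2) for v in drop_off_curve])
```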

*

Inversed N-gram viewer: Searching the space of word temporal profiles

V. Buntinx; F. Kaplan

2015. Digital Humanities 2015, Sydney, Australia, 29 June–3 July 2015.

*

Quelques réflexions préliminaires sur la Venice Time Machine

F. Kaplan

L'archive dans quinze ans; Louvain-la-Neuve: Academia, 2015. p. 161-179.

Even today, most historians are used to working in very small teams, focusing on very specific questions. They only very rarely share their notes or their data, perceiving, rightly or wrongly, that their preparatory research underpins the originality of their future work. Becoming aware of the size and informational density of archives such as Venice's must make us realize that it is impossible for a few historians, working in an uncoordinated way, to cover so vast an object with any systematicity. If we want to attempt to transform an archive of 80 kilometres covering a thousand years of history into a structured information system, we must develop a collaborative, coordinated and massive scientific programme. We are facing an informational entity that is too large. Only an international scientific collaboration can attempt to come to terms with it.

*

A Map for Big Data Research in Digital Humanities

F. Kaplan

Frontiers in Digital Humanities. 2015.

DOI : 10.3389/fdigh.2015.00001.

This article is an attempt to represent Big Data research in digital humanities as a structured research field. A division in three concentric areas of study is presented. Challenges in the first circle – focusing on the processing and interpretations of large cultural datasets – can be organized linearly following the data processing pipeline. Challenges in the second circle – concerning digital culture at large – can be structured around the different relations linking massive datasets, large communities, collective discourses, global actors, and the software medium. Challenges in the third circle – dealing with the experience of big data – can be described within a continuous space of possible interfaces organized around three poles: immersion, abstraction, and language. By identifying research challenges in all these domains, the article illustrates how this initial cartography could be helpful to organize the exploration of the various dimensions of Big Data Digital Humanities research.

*

Mapping the Early Modern News Flow: An Enquiry by Robust Text Reuse Detection

G. Colavizza; M. Infelise; F. Kaplan

2015. HistoInformatics 2014. p. 244-253.

DOI : 10.1007/978-3-319-15168-7_31.

Early modern printed gazettes relied on a system of news exchange and text reuse largely based on handwritten sources. The reconstruction of this information exchange system is possible by detecting reused texts. We present a method to identify text borrowings within noisy OCRed texts from printed gazettes, based on string kernels and local text alignment. We apply our method to a corpus of Italian gazettes for the year 1648. Besides unveiling substantial overlaps in news sources, we are able to assess the editorial policy of different gazettes and account for a multi-faceted system of text reuse.
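
The local-alignment component of such a method can be sketched as a word-level Smith-Waterman dynamic program (the scoring scheme and the two toy gazette fragments are simplifications, not the paper's parameters):

```python
def local_align(a: list[str], b: list[str], match=2, mismatch=-1, gap=-1) -> int:
    """Word-level Smith-Waterman: returns the best local alignment score,
    i.e. the strongest reused passage shared by the two token sequences."""
    best = 0
    prev = [0] * (len(b) + 1)
    for wa in a:
        curr = [0]
        for j, wb in enumerate(b, 1):
            score = max(0,
                        prev[j - 1] + (match if wa == wb else mismatch),
                        prev[j] + gap,      # gap in b
                        curr[j - 1] + gap)  # gap in a
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

g1 = "avviso da genova del 12 giugno la flotta spagnola è giunta".split()
g2 = "si scrive da genova del 12 giugno che la flotta è partita".split()
print(local_align(g1, g2))  # a high score signals a shared news source
```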

*

X-ray spectrometry and imaging for ancient administrative handwritten documents

F. Albertin; M. Stampanoni; E. Peccenini; Y. Hwu; F. Kaplan et al.

X-Ray Spectrometry. 2015.

DOI : 10.1002/xrs.2581.

‘Venice Time Machine’ is an international program whose objective is transforming the ‘Archivio di Stato’ – 80 km of archival records documenting every aspect of 1000 years of Venetian history – into an open-access digital information bank. Our study is part of this project: we are exploring new, faster, and safer ways to digitize manuscripts, without opening them, using X-ray tomography. A fundamental issue is the chemistry of the inks used for administrative documents: contrary to pieces of high artistic or historical value, for such items the composition is scarcely documented. We used X-ray fluorescence to investigate the inks of four ordinary Italian handwritten documents from the 15th to the 17th century. The results were correlated to X-ray images acquired with different techniques. In most cases, the iron detected in the ‘iron gall’ inks produces absorption contrast suitable for tomographic reconstruction, allowing computer extraction of handwriting information from sets of projections. When absorption is too low, differential phase contrast imaging can reveal the characters from the substrate morphology.

*

Ancient administrative handwritten documents: X-ray analysis and imaging

F. Albertin; A. Astolfo; E. Peccenini; Y. Hwu; F. Kaplan et al.

Journal of Synchrotron Radiation. 2015.

DOI : 10.1107/S1600577515000314.

Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even in everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.

*

Il pleut des chats et des chiens: Google et l'impérialisme linguistique

F. Kaplan; D. Kianfar

Le monde diplomatique. 2015.

At the beginning of last December, anyone who asked Google Translate for the Italian equivalent of the phrase « Cette fille est jolie » ("This girl is pretty") got a strange suggestion: Questa ragazza è abbastanza, literally "This girl is rather". The beauty had been lost in translation. How can one of the best-performing machine translators in the world, backed by a unique linguistic capital of billions of sentences, make such a crude mistake? The answer is simple: it goes through English. "Jolie" can be translated as pretty, which means both "pretty" and "rather". The second sense corresponds to the Italian abbastanza.

2014

*

L'historien et l'algorithme

F. Kaplan; M. Fournier; M.-A. Nuessli

Le Temps des Humanités Digitales; FYP Editions, 2014. p. 49-63.

The stormy relationship between history and computer science is nothing new, and the revolution in the historical sciences announced decades ago is still keeping us waiting. In this chapter we nevertheless try to show that an unprecedented evolution is at work in the historical sciences today, and that this transformation differs from the one that characterised, a few decades ago, the arrival of "cliometrics" and quantitative methods. Our hypothesis is that, through the effects of two complementary processes, we are witnessing a generalisation of algorithms as mediating objects of historical knowledge.

*

X-ray Spectrometry and imaging for ancient handwritten document

F. Albertin; A. Astolfo; M. Stampanoni; E. Peccenini; Y. Hwu et al.

2014. European Conference on X-Ray Spectrometry, EXRS2014, Bologna.

We detected handwritten characters in ancient documents from several centuries with different synchrotron X-ray imaging techniques. The results were correlated to those of X-ray fluorescence analysis. In most cases, heavy elements produced image quality suitable for tomography reconstruction, leading to virtual page-by-page “reading”. When absorption is too low, differential phase contrast (DPC) imaging can reveal the characters from the substrate morphology. This paves the way to new strategies for information harvesting during mass digitization programs. This study is part of the Venice Time Machine project, an international research program aiming to transform the immense Venetian archival records into an open-access digital information system. The Archivio di Stato in Venice holds about 80 km of archival records documenting every aspect of 1000 years of Venetian history. A large part of these records takes the form of ancient bound registers that can only be digitized through cautious manual operations: each page must be turned by hand in order to be photographed. Our project explores new ways to virtually “read” manuscripts without opening them. We specifically plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the manuscripts, reducing the risk of damage and speeding up the process. The present tests demonstrate that the approach is feasible. Furthermore, they show that over a very long period of time the common recipes used in Europe for inks in “normal” handwriting (ship records, notary papers, commercial transactions, demographic accounts, etc.) very often produced a high concentration of heavy or medium-heavy elements such as Fe, Hg and Ca. This opens the way in general to X-ray analysis and imaging. Furthermore, it could lead to a better understanding of the deterioration mechanisms in the search for remedies. The most important of the results that we will present is tomographic reconstruction: we simulated books with stacks of manuscript fragments and obtained, from sets of projection images, individual views that correspond to a virtual page-by-page “reading” without opening the volume.

*

Virtual X-ray Reading (VXR) of Ancient Administrative Handwritten Documents

F. Albertin; A. Astolfo; M. Stampanoni; E. Peccenini; Y. Hwu et al.

2014. Synchrotron Radiation in Art and Archaeology, SR2A 14.

The study of ancient documents is too often confined to specimens of high artistic value or to official writings. Yet a wealth of information is stored in administrative records such as ship records, notary papers, work contracts, tax declarations, commercial transactions or demographic accounts. One of the best examples is the Venice Time Machine project, which targets a massive digitization and information extraction program for the Venetian archives. The Archivio di Stato in Venice holds about 80 km of archival documents spanning ten centuries and documenting every aspect of the Venetian Mediterranean Empire. If unlocked and transformed into a digital information system, this information could significantly change our understanding of European history. We are exploring new ways to facilitate and speed up this broad task, exploiting X-ray techniques, notably those based on synchrotron light. Specifically, we plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the bound administrative registers, reducing the risk of damage and accelerating the process. We present here positive tests of this approach. First, we systematically analyzed the ink composition of a sample of Italian handwritten documents spanning several centuries. Then, we performed X-ray imaging with different contrast mechanisms (absorption, scattering and refraction) using the differential phase contrast (DPC) mode of the TOMCAT beamline of the Swiss Light Source (SLS). Finally, we selected cases of high contrast to perform tomographic reconstruction and demonstrate page-by-page handwriting recognition. The experiments concerned both black inks from different centuries and red ink from the 15th century. For the majority of the specimens, we found in the ink areas heavy or medium-heavy elements such as Fe, Ca, Hg, Cu and Zn. This eliminates a major question about our approach, since documentation on the nature of inks in ancient administrative records is quite scarce. As a byproduct, the approach can produce valuable information on the ink-substrate interaction, with the objective of understanding and preventing corrosion and deterioration.

*

La simulation humaine : le roman-fleuve comme terrain d'expérimentation narrative

C. Bornet; D. de Roulet; F. Kaplan

Cahiers de Narratologie. 2014.

In this article we present the approach and first results of a participatory research project conducted jointly by the EPFL digital humanities laboratory (DHLAB) and the Swiss writer Daniel de Roulet. In this study, we explore the ways in which digital reading may influence how complex narratives, such as the roman-fleuve or the saga, are written and reorganised. We also present our first conclusions and possible future work in this vast and still little-studied domain.

*

Character Networks and Centrality

Y. Rochat / H. Volken; F. Kaplan (Dir.)

University of Lausanne, 2014.

A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automatise the extraction of literary social networks (Elson, 2012; Sack, 2013) and other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness, eigenvector. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures that bear similar centrality scores. The first part revolves around the mentor of Rousseau; a figure of openness. The second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
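
For readers who want to reproduce the centrality step, the sketch below computes the same four measures with networkx on a toy co-occurrence list; the characters and edges are invented placeholders, not data from the thesis.

import networkx as nx

# Toy co-occurrence edges (characters sharing a book-index page); invented data.
edges = [
    ("Rousseau", "Mme de Warens"), ("Rousseau", "Diderot"),
    ("Rousseau", "Grimm"), ("Diderot", "Grimm"),
    ("Mme de Warens", "Claude Anet"),
]
G = nx.Graph(edges)

# The four centralities used to type narrative roles.
measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
for name, scores in measures.items():
    print(f"{name:12s} most central:", max(scores, key=scores.get))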

*

Encoding metaknowledge for historical databases

M.-A. Nuessli; F. Kaplan

2014. Digital Humanities 2014, Lausanne, Switzerland, July 7-12, 2014. p. 288-289.

Historical knowledge is fundamentally uncertain. A given account of an historical event is typically based on a series of sources and on sequences of interpretation and reasoning based on these sources. Generally, the product of this historical research takes the form of a synthesis, like a narrative or a map, but does not give a precise account of the intellectual process that led to this result. Our project consists of developing a methodology, based on semantic web technologies, to encode historical knowledge while documenting, in detail, the intellectual sequences linking the historical sources with a given encoding, also known as paradata. More generally, the aim of this methodology is to build systems capable of representing multiple historical realities, as they are used to document the underlying processes in the construction of possible knowledge spaces.

*

La question de la langue à l'époque de Google

F. Kaplan

Digital Studies Organologie des savoirs et technologies de la connaissance; Limoge: Fyp, 2014.

In 2012, Google had a turnover of 50 billion dollars, an impressive financial result for a company created only some fifteen years earlier. Fifty billion dollars is 140 million dollars a day, 5 million dollars an hour. If you read this chapter in about ten minutes, Google will meanwhile have earned almost a million dollars. What does Google sell to achieve such impressive financial performance? Google sells words, millions of words.

*

Fantasmagories au musée

F. Kaplan

Alliage. 2014.

The increasingly pervasive use of new technologies in museums and libraries (touch tablets, audio guides, interactive screens, etc.) is said to divide audiences between those who seek understanding and those for whom emotion comes first. How, then, can a shared collective experience be reconciled with technical devices? How can virtual labels floating in mid-air become "didactic phantasmagorias"? Feedback from a mixed-reality museographic experiment with "holographic" virtual display cases.

*

A Preparatory Analysis of Peer-Grading for a Digital Humanities MOOC

F. Kaplan; C. Bornet

2014. Digital Humanities 2014, Lausanne, July 7-12, 2014. p. 227-229.

Over the last two years, Massive Open Online Classes (MOOCs) have been unexpectedly successful in convincing large numbers of students to pursue online courses in a variety of domains. Contrary to the "learn anytime anywhere" motto, this new generation of courses is based on regular assignments that must be completed and corrected on a fixed schedule. Successful courses attracted about 50 000 students in the first week but typically stabilised around 10 000 in the following weeks, as most courses demand significant involvement. With 10 000 students, grading is obviously an issue, and the first successful courses tended to be technical, typically in computer science, where various options for automatic grading systems could be envisioned. However, this posed a challenge for humanities courses. The solution that has been investigated for dealing with this issue is peer-grading: having students grade the work of one another. The intuition that this would work was based on older results showing high correlation between professor grading, peer-grading and self-grading. The generality of this correlation can reasonably be questioned. There is a high chance that peer-grading works for certain domains, or for certain assignments, but not for others. Ideally this should be tested experimentally before launching any large-scale course. EPFL is one of the first European schools to experiment with MOOCs in various domains. Since the launch of these first courses, preparing an introductory MOOC on Digital Humanities has been one of our top priorities. However, we felt it was important to first validate the kind of peer-grading strategy we were planning to implement on a smaller set of students, to determine whether it would actually work for the assignments we envisioned. This motivated the present study, which was conducted during the first semester of our master's-level introductory course on Digital Humanities at EPFL.
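
The underlying sanity check is a simple correlation between grading sources. A minimal sketch, with invented grades on the Swiss 1-6 scale and not the study's actual analysis code, could look like this:

from scipy.stats import pearsonr

# Invented grades for six assignments, one value per grading source.
prof_grades = [4.0, 5.5, 3.0, 6.0, 4.5, 5.0]
peer_grades = [4.5, 5.0, 3.5, 5.5, 4.5, 5.5]   # mean of several peer grades
self_grades = [5.0, 5.5, 4.0, 6.0, 5.0, 5.5]

# High correlation with professor grades would support the peer-grading strategy.
for name, grades in [("peer", peer_grades), ("self", self_grades)]:
    r, p = pearsonr(prof_grades, grades)
    print(f"professor vs {name}: r = {r:.2f} (p = {p:.3f})")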

*

Linguistic Capitalism and Algorithmic Mediation

F. Kaplan

Representations. 2014.

Google’s highly successful business model is based on selling words that appear in search queries. Organizing several million auctions per minute, the company has created the first global linguistic market and demonstrated that linguistic capitalism is a lucrative business domain, one in which billions of dollars can be realized per year. Google’s services need to be interpreted from this perspective. This article argues that linguistic capitalism implies not an economy of attention but an economy of expression. As several million users worldwide daily express themselves through one of Google’s interfaces, the texts they produce are systematically mediated by algorithms. In this new context, natural languages could progressively evolve to seamlessly integrate the linguistic biases of algorithms and the economical constraints of the global linguistic economy.

*

Analyse des réseaux de personnages dans Les Confessions de Jean-Jacques Rousseau

Y. Rochat; F. Kaplan

Les Cahiers du Numérique. 2014.

DOI : 10.3166/LCN.10.3.109‐133.

This article studies the concept of centrality in the networks of characters appearing in Les Confessions by Jean-Jacques Rousseau. Our aim is to characterise certain aspects of the narrative roles of the characters on the basis of their co-occurrences in the text. We sketch a theoretical framework for literary network analysis, bringing together narratology, distant reading and social network analysis. We extract co-occurrences from a book index without the need for text analysis and describe the narrative roles of the characters. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. Eventually, we compute four types of centrality (degree, closeness, betweenness, eigenvector) and use these measures to propose a typology of narrative roles for the characters.

*

A Network Analysis Approach of the Venetian Incanto System

Y. Rochat; M. Fournier; A. Mazzei; F. Kaplan

2014. Digital Humanities 2014, Lausanne, July 7-12, 2014.

The objective of this paper was to perform new analyses of the structure and evolution of the Incanto system. The hypothesis was that network analysis could go beyond the textual narrative or even cartographic representation, potentially offering a new perspective for understanding this maritime system.

*

Character networks in Les Confessions from Jean-Jacques Rousseau

Y. Rochat; F. Kaplan

2014. Texas Digital Humanities Conference, Houston, Texas, USA, April 10-12, 2014.

*

Semi-Automatic Transcription Tool for Ancient Manuscripts

M. M. J.-A. Simeoni

IC Research Day 2014: Challenges in Big Data, SwissTech Convention Center, Lausanne, Switzerland, June 12, 2014.

In this work, we investigate various techniques from the fields of shape analysis and image processing in order to construct a semi-automatic transcription tool for ancient manuscripts. First, we design a shape matching procedure using shape contexts, introduced in [1], and exploit this procedure to compute different distances between two arbitrary shapes/words. Then, we use Fisher discrimination to combine these distances into a single similarity measure and use it to naturally represent the words on a similarity graph. Finally, we investigate an unsupervised clustering analysis on this graph to create groups of semantically similar words and propose an uncertainty measure associated with the attribution of a word to a group. The clusters together with the uncertainty measure form the core of the semi-automatic transcription tool, which we test on a dataset of 42 words. The average classification accuracy achieved with this technique on this dataset is 86%, which is quite satisfying. The tool reduces the number of words that need to be typed to transcribe a document by 70%.
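
The following Python sketch illustrates the overall scheme under stated assumptions: several synthetic distance matrices are combined with weights derived from a Fisher discriminant ratio computed on a few labelled same/different pairs, and the resulting similarity graph is clustered. All data, the number of distances and the clustering choice (spectral clustering as a stand-in) are invented for illustration, not taken from the thesis.

import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 42          # number of word images, as in the study
n_dist = 3      # e.g. shape-context, width and ink-density distances (assumed)
D = rng.random((n_dist, n, n))
D = (D + D.transpose(0, 2, 1)) / 2          # make the fake distances symmetric

# A few labelled pairs (i, j, same_word?) to weight each distance by its
# Fisher ratio: (mean_diff - mean_same)^2 / (var_diff + var_same).
pairs = [(0, 1, True), (0, 2, False), (3, 4, True), (3, 5, False), (6, 7, True)]
weights = []
for k in range(n_dist):
    same = np.array([D[k, i, j] for i, j, s in pairs if s])
    diff = np.array([D[k, i, j] for i, j, s in pairs if not s])
    weights.append((diff.mean() - same.mean()) ** 2 /
                   (diff.var() + same.var() + 1e-9))
weights = np.array(weights) / sum(weights)

combined = np.tensordot(weights, D, axes=1)       # weighted combined distance
similarity = np.exp(-combined / combined.mean())  # distances -> affinities
labels = SpectralClustering(n_clusters=10, affinity="precomputed",
                            random_state=0).fit_predict(similarity)
print(labels)  # word-image groups, each to be transcribed only once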

*

Attentional Processes in Natural Reading: the Effect of Margin Annotations on Reading Behaviour and Comprehension

A. Mazzei; T. Koll; F. Kaplan; P. Dillenbourg

2014. ACM Symposium on Eye Tracking Research and Applications, Safety Harbor, USA, March 26-28, 2014.

We present an eye tracking study to investigate how natural reading behavior and reading comprehension are influenced by in-context annotations. In a lab experiment, three groups of participants were asked to read a text and answer comprehension questions: a control group without taking annotations, a second group reading and taking annotations, and a third group reading a peer-annotated version of the same text. A self-made head-mounted eye tracking system was specifically designed for this experiment, in order to study how learners read and quickly re-read annotated paper texts, in low constrained experimental conditions. In the analysis, we measured the phenomenon of annotation-induced overt attention shifts in reading, and found that: (1) the reader's attention shifts toward a margin annotation more often when the annotation lies in the early peripheral vision, and (2) the number of attention shifts, between two different types of information units, is positively related to comprehension performance in quick re-reading. These results can be translated into potential criteria for knowledge assessment systems.

*

3D Model-Based Gaze Estimation in Natural Reading: a Systematic Error Correction Procedure based on Annotated Texts

A. Mazzei; S. Eivazi; Y. Marko; F. Kaplan; P. Dillenbourg

2014. ACM Symposium on Eye Tracking Research and Applications, Safety Harbor, USA, March 26-28, 2014.

Studying natural reading and its underlying attention processes requires devices that are able to provide precise measurements of gaze without rendering the reading activity unnatural. In this paper we propose an eye tracking system that can be used to conduct analyses of reading behavior in low constrained experimental settings. The system is designed for dual-camera-based head-mounted eye trackers and allows free head movements and note taking. The system is composed of three different modules. First, a 3D model-based gaze estimation method computes the reader’s gaze trajectory. Second, a document image retrieval algorithm is used to recognize document pages and extract annotations. Third, a systematic error correction procedure is used to post-calibrate the system parameters and compensate for spatial drifts. The validation results show that the proposed method is capable of extracting reliable gaze data when reading in low constrained experimental conditions.
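
As an illustration of the third module only, a post-hoc drift correction can be phrased as a least-squares affine fit from raw gaze points to known anchor positions (e.g. annotated words). The sketch below uses invented coordinates and is not the paper's calibration procedure.

import numpy as np

# Raw gaze estimates vs. the known positions they should hit (invented, in px).
raw = np.array([[102.0, 51.0], [198.0, 49.0], [105.0, 152.0], [201.0, 148.0]])
true = np.array([[100.0, 60.0], [200.0, 60.0], [100.0, 160.0], [200.0, 160.0]])

# Solve [x y 1] @ A = [x' y'] for the 3x2 affine matrix A by least squares.
X = np.hstack([raw, np.ones((len(raw), 1))])
A, *_ = np.linalg.lstsq(X, true, rcond=None)

corrected = X @ A
print(np.abs(corrected - true).max())  # residual error after drift correction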

2013

*

How to build an information time machine

F. Kaplan

TEDxCaFoscariU, Venice, Italy, June, 2013.

The Venice Time Machine project aims at building a multidimensional model of Venice and its evolution covering a period of more than 1000 years. Kilometers of archives are currently being digitized, transcribed and indexed, laying the basis of the largest database ever created on Venetian documents. Millions of photos are processed using machine vision algorithms and stored in a format adapted to high-performance computing approaches. In addition to these primary sources, the contents of thousands of monographs are indexed and made searchable. The information extracted from these diverse sources is organized in a semantic graph of linked data and unfolded in space and time as part of a historical geographical information system, based on high-resolution scanning of the city itself.

*

A social network analysis of Rousseau’s autobiography “Les Confessions”

Y. Rochat; F. Kaplan; C. Bornet

2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013.

We propose an analysis of the social network composed of the characters appearing in Jean-Jacques Rousseau's autobiography Les Confessions, with edges based on co-occurrences. The work consists of twelve volumes that span over fifty years of his life. Having a single author allows us to consider the book as a coherent work, unlike some of the historical texts from which networks are often extracted, and to compare the evolution of character patterns through the books on a common basis. Les Confessions, considered one of the first modern autobiographies, has the originality of letting us compose a social network close to reality, with only a bias introduced by the author, which has to be taken into account during the analysis. Hence, with this paper, we discuss the interpretation of networks based on the content of a book as social networks. We also, in a digital humanities approach, discuss the relevance of this object as a historical source and a narrative tool.

*

Analyse de réseaux sur les Confessions de Rousseau

Y. Rochat; F. Kaplan

2013. Humanités délivrées, Lausanne, Switzerland, October 1-2, 2013.

*

Les "Big data" du passé

F. Kaplan

Bulletin SAGW. 2013.

The humanities are about to undergo an upheaval comparable to the one that struck biology over the last thirty years. This revolution essentially consists of a change of scale in the ambition and size of research projects. We must train a new generation of young researchers prepared for this transformation.

*

Expanding Eye-Tracking Methods to Explain the Socio-Cognitive Effects of Shared Annotations

A. Mazzei / P. Dillenbourg; F. Kaplan (Dir.)

Lausanne, EPFL, 2013.

DOI : 10.5075/epfl-thesis-5917.

*

The practical confrontation of engineers with a new design endeavour: The case of digital humanities

F. Kaplan; D. Vinck

Engineering Practice in a Global Context; London, UK: CRC Press, 2013. p. 61-78.

This chapter shows some of the practices engineers use when they are confronted with completely new situations, when they enter an emerging field where methods and paradigms are not yet stabilized. Following the engineers here helps shed light on their practices when they face new fields and new interlocutors. This is the case for engineers and computer scientists who engage with the human and social sciences to imagine, design, develop and implement digital humanities (DH) with specific hardware, software and infrastructure.

*

Le cercle vertueux de l'annotation

F. Kaplan

Le lecteur à l'oeuvre; Gollion, Suisse: Infolio, 2013. p. 57-68.

Annotating is good for the reader's immediate comprehension. Reading annotated texts makes them easier to understand. This twofold relevance of annotation, confirmed by experiment, may explain its centuries-old success.

*

Dyadic pulsations as a signature of sustainability in correspondence networks

M. Aeschbach; P.-Y. Brandt; F. Kaplan

2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013.

*

Are Google’s linguistic prosthesis biased towards commercially more interesting expressions? A preliminary study on the linguistic effects of autocompletion algorithms.

A. Jobin; F. Kaplan

2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013. p. 245-248.

Google's linguistic prostheses have become common mediators between our intended queries and their actual expressions. By correcting a mistyped word or extending a small string of letters into a statistically plausible continuation, Google offers a valuable service to users. However, Google might also be transforming a keyword with no or little value into a keyword for which bids are more likely. Since Google's word-bidding algorithm accounts for most of the company's revenues, it is reasonable to ask whether these linguistic prostheses are biased towards commercially more interesting expressions. This study describes a method for making progress on this question. Based on an optimal experiment design algorithm, we reconstruct a model of Google's autocompletion and value assignment functions. We can then explore and question the various possible correlations between the two functions. This is a first step towards the larger goal of understanding how Google's linguistic economy impacts natural language.
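
For context, the raw completions themselves can be collected from an unofficial, undocumented Google suggest endpoint; the snippet below shows only that gathering step. The endpoint, its parameters and its JSON layout are assumptions that may change or be blocked at any time, and this is not the instrument used in the study.

import requests

def completions(prefix, lang="en"):
    # Unofficial endpoint (assumption): returns [query, [suggestions, ...], ...].
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "hl": lang, "q": prefix},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[1]

# Do short prefixes drift toward bid-heavy commercial keywords?
print(completions("cheap flig"))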

*

Living With a Vacuum Cleaning Robot - A 6-month Ethnographic Study

J. Fink; V. Bauwens; F. Kaplan; P. Dillenbourg

International Journal of Social Robotics. 2013.

DOI : 10.1007/s12369-013-0190-2.

Little is known about the usage, adoption process and long-term effects of domestic service robots in people’s homes. We investigated the usage, acceptance and process of adoption of a vacuum cleaning robot in nine households by means of a six month ethnographic study. Our major goals were to explore how the robot was used and integrated into daily practices, whether it was adopted in a durable way, and how it impacted its environment. We studied people’s perception of the robot and how it evolved over time, kept track of daily routines, the usage patterns of cleaning tools, and social activities related to the robot. We integrated our results in an existing framework for domestic robot adoption and outlined similarities and differences to it. Finally, we identified several factors that promote or hinder the process of adopting a domestic service robot and make suggestions to further improve human-robot interactions and the design of functional home robots toward long-term acceptance.

2012

*

Interactive device and method for transmitting commands from a user

F. Kaplan

2012.

Patent number(s) :
US8126221
US2009208052
EP2090961

According to the present invention, there is provided an interactive device comprising a display, a camera and an image analyzing means, said interactive device comprising means to acquire an image with the camera, the analyzing means detecting at least a human face in the acquired image and displaying on the display at least a pattern where the human face was detected, wherein the interactive device further comprises means to determine a halo region extending at least around the pattern, means to add into the halo region at least one interactive zone related to a command, means to detect movement in the interactive zone, and means to execute the command by said device.
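
As a loose illustration of the claim (a face is detected, a pattern is drawn where it was found, and a surrounding "halo" region holds an interactive zone), the sketch below uses an off-the-shelf OpenCV Haar cascade. The patent specifies no particular algorithm, and the file name, margins and zone layout are invented.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("camera_image.png")  # placeholder acquired image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)   # pattern
    m = w // 2                                                      # halo margin
    cv2.rectangle(frame, (x - m, y - m), (x + w + m, y + h + m),
                  (0, 255, 0), 1)                                   # halo region
    # One interactive zone (e.g. a button bound to a command) inside the halo:
    cv2.rectangle(frame, (x + w + 10, y), (x + w + m, y + h // 3),
                  (0, 0, 255), 1)

cv2.imwrite("halo_overlay.png", frame)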

*

L'ordinateur du XXIe siècle sera un robot

F. Kaplan

Et l'Homme créa le robot; Paris: Musée des Arts et Métiers / Somogy éditions d'Art, 2012.

*

La bibliothèque comme interface physique de découverte et lieu de curation collective

F. Kaplan

Documentaliste - Sciences de l'information. 2012.

A library is always a volume organised into two sub-spaces: a public part (front end) with which users can interact, and a hidden part (back end) used for logistics and storage. At the Bibliothèque nationale de France, a robotised system connects the immense underground spaces open to the public with the four towers that store the books. The architect Dominique Perrault imagined a vertiginous library-machine in which the circulation of people was designed symmetrically to the circulation of books ...

*

Can a Table Regulate Participation in Top Level Managers' Meetings?

F. Roman; S. Mastrogiacomo; D. Mlotkowski; F. Kaplan; P. Dillenbourg

2012. International Conference on Supporting Group Work GROUP'12, Sanibel, Florida, USA, October 27-31, 2012.

We present a longitudinal study of participation regulation effects in the presence of a speech-aware interactive table. This study focuses on training meetings of groups of top-level managers, whose composition does not change, in a corporate organization. We show that an effect of balancing participation develops over time. We also report other emerging group-specific features such as interaction patterns and signatures, leadership effects, and behavioral changes between meetings. Finally, we collect feedback from the participants and qualitatively analyze the human and social aspects of the participants' interaction mediated by the technology.

*

Supporting opportunistic search in meetings with tangible tabletop

N. Li; F. Kaplan; O. Mubin; P. Dillenbourg

2012. The 2012 ACM Annual Conference Extended Abstracts, Austin, Texas, USA, May 5-10, 2012.

DOI : 10.1145/2212776.2223837.

Web searches are often needed in collocated meetings. Many research projects have been conducted to support collaborative search in information-seeking meetings, where searches are executed both intentionally and intensively. However, in most common meetings, Web searches happen randomly and with low intensity. They serve neither as main tasks nor as major activities. This kind of search can be referred to as opportunistic search. Opportunistic search in meetings has not yet been studied; our research is based upon this motivation. We propose an augmented tangible tabletop system with a semi-ambient conversation-context-aware surface as well as foldable paper browsers for supporting opportunistic search in collocated meetings. In this paper, we present the design of the system and initial findings.

*

How books will become machines

F. Kaplan

Lire demain : Des manuscrits antiques à l’ère digitale; Lausanne: PPUR, 2012. p. 27-44.

This article is an attempt to reframe the evolution of books into a larger evolutionary theory. A central concept of this theory is the notion of regulated representation. A regulated representation is governed by a set of production and usage rules. Our core hypothesis is that regulated representations get more regular over time. The general process of this regulating tendency is the transformation of a convention into a mechanism. The regulation usually proceeds in two consecutive steps, firstly mechanizing the representation production rules and secondly its conventional usages. Ultimately, through this process, regulated representations tend to become machines.

*

Hands-on Symmetry with Augmented Reality on Paper

Q. Bonnard; A. Legge; F. Kaplan; P. Dillenbourg

2012. 9th International Conference on Hands-on Science, Antalya, Turkey, October 16-21, 2012.

Computers have been trying to make their way into education because they allow learners to manipulate abstract notions and explore problem spaces easily. However, even with the tremendous potential of computers in education, their integration into formal learning has had limited success. This may be due to the fact that computer interfaces completely rupture the existing tools and curricula. We propose paper interfaces as a solution. Paper interfaces can be manipulated and annotated yet still maintain the processing power and dynamic displays of computers. We focus on geometry, which allows us to fully harness these two interaction modalities: for example, cutting a complex paper shape into simpler forms shows how to compute an area. We use a camera-projector system to project information on pieces of paper detected via a 2D barcode. We developed and experimented with several activities based on this system for geometry learning; here we focus on one activity addressing symmetry. This activity is based on a sheet, part of whose content is scanned and then reprojected according to one or more symmetry axes. Such a sheet is used to illustrate, in real time, how a symmetric drawing is constructed. Anything in the input area can be reflected: ink, paper shapes, or physical objects. Reporting on a collaboration with three teachers, we show how the augmented sheets provide an easy way for teachers to develop their own augmented reality activities. These teachers successfully used the activities in their classes, integrating them into the normal course of their teaching. We also relate how paper interfaces let pupils express their creativity while working on geometry.

*

Paper Interfaces to Support Pupils and Teachers in Geometry

Q. Bonnard; F. Kaplan; P. Dillenbourg

Digital Ecosystems for Collaborative Learning: Embedding Personal and Collaborative Devices to Support Classrooms of the Future (DECL). Workshop in the International Conference of the Learning Sciences (ICLS), Sydney, Australia, July 2.

*

Tangible Paper Interfaces: Interpreting Pupils' Manipulations

Q. Bonnard; P. Jermann; A. Legge; F. Kaplan; P. Dillenbourg

2012. Interactive Tabletops and Surfaces 2012 Conference, Cambridge, Massachusetts, USA, November 11-14, 2012.

Paper interfaces merge the advantages of the digital and physical world. They can be created using normal paper augmented by a camera+projector system. They are particularly promising for applications in education, because paper is already fully integrated in the classroom, and computers can augment them with a dynamic display. However, people mostly use paper as a document, and rarely for its characteristics as a physical body. In this article, we show how the tangible nature of paper can be used to extract information about the learning activity. We present an augmented reality activity for pupils in primary schools to explore the classification of quadrilaterals based on sheets, cards, and cardboard shapes. We present a preliminary study and an in-situ, controlled study, making use of this activity. From the detected positions of the various interface elements, we show how to extract indicators about problem solving, hesitation, difficulty levels of the exercises, and the division of labor among the groups of pupils. Finally, we discuss how such indicators can be used, and how other interfaces can be designed to extract different indicators.

*

Paper Interfaces for Learning Geometry

Q. Bonnard; H. Verma; F. Kaplan; P. Dillenbourg

2012. 7th European Conference on Technology Enhanced Learning, Saarbrücken, Germany, September 18-21, 2012.

Paper interfaces offer tremendous possibilities for geometry education in primary schools. Existing computer interfaces designed for learning geometry do not consider the integration of conventional school tools, which form part of the curriculum. Moreover, most computer tools are designed specifically for individual learning; some propose group activities, but most disregard classroom-level learning, thus impeding their adoption. We present an augmented-reality-based tabletop system with interface elements made of paper that addresses these issues. It integrates conventional geometry tools seamlessly into the activity and enables group and classroom-level learning. In order to evaluate our system, we conducted an exploratory user study based on three learning activities: classifying quadrilaterals, discovering the protractor and describing angles. We observed how paper interfaces can be easily adopted into traditional classroom practices.

*

Anthropomorphic Language in Online Forums about Roomba, AIBO and the iPad

J. Fink; O. Mubin; F. Kaplan; P. Dillenbourg

2012. The IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO 2012), Technische Universität München, Munich, Germany, May 21-23, 2012. p. 54-59.

DOI : 10.1109/ARSO.2012.6213399.

What encourages people to refer to a robot as if it were a living being? Is it because of the robot's humanoid or animal-like shape, its movements or rather the kind of interaction it enables? We aim to investigate the characteristics of robots that lead people to anthropomorphize them by comparing different kinds of robotic devices and contrasting them with an interactive technology. We addressed this question by comparing anthropomorphic language in online forums about the Roomba robotic vacuum cleaner, the AIBO robotic dog, and the iPad tablet computer. A content analysis of 750 postings was carried out. We expected to find the highest amount of anthropomorphism in the AIBO forum but were not sure how far people referred to Roomba or the iPad as a lifelike artifact. Findings suggest that people anthropomorphize their robotic dog significantly more than their Roomba or iPad, across different topics of forum posts. Further, the topic of the post had a significant impact on anthropomorphic language.

2011

*

Métaphores machinales

F. Kaplan

L'Homme-machine et ses avatars. Entre science, philosophie et littérature - XVIIe-XXIe siècles; Vrin, 2011. p. 237-240.

Over the centuries, man has seen himself as a machine that is successively hydropneumatic, mechanical, electrical and, today, digital. Each new invention offers a new perspective on the living without ever being completely satisfactory. There always remains "something" that seems hard to reduce to a mechanism, and for many this something, which we see only by difference, is what makes us human.

*

From hardware and software to kernels and envelopes: a concept shift for robotics, developmental psychology, and brain sciences

F. Kaplan; P.-Y. Oudeyer

Neuromorphic and Brain-Based robots; Cambridge: Cambridge University Press, 2011. p. 217-250.

*

L'homme, l'animal et la machine : Perpétuelles redéfinitions

G. Chapouthier; F. Kaplan

CNRS Editions, Paris.

Do animals have consciousness? Can machines be intelligent? Every new discovery by biologists, every technological advance, invites us to reconsider what is proper to man. This book, the fruit of a collaboration between Georges Chapouthier, a biologist and philosopher of biology, and Frédéric Kaplan, an engineer specialised in artificial intelligence and human-machine interfaces, reviews the many ways in which animals and machines can be compared to human beings. After a synthetic overview of the capacities of animals and machines to learn, develop consciousness, feel pain or emotion, and build a culture or a morality, the authors detail what binds us to our biological or artificial alter egos: attachment, sexuality, law, hybridisation. Beyond that, they explore traits that seem specifically human, such as imagination, the soul or the sense of time, but for how much longer... A stimulating exploration at the heart of the mysteries of human nature, proposing a redefinition of man in his relationship to the animal and the machine.

*

HRI in the home: A Longitudinal Ethnographic Study with Roomba

J. Fink; V. Bauwens; O. Mubin; F. Kaplan; P. Dillenbourg

1st Symposium of the NCCR robotics, Zürich, Switzerland, June 16, 2011.

Personal service robots, such as the iRobot Roomba vacuum cleaner provide a promising opportunity to study human-robot interaction (HRI) in domestic environments. Still rather little is known about long-term impacts of robotic home appliances on people’s daily routines and attitudes and how they evolve over time. We investigate these aspects through a longitudinal ethnographic study with nine households, to which we gave a Roomba cleaning robot. During six months, data is gathered through a combination of qualitative and quantitative methods.

*

Roomba is not a Robot; AIBO is still Alive! Anthropomorphic Language in Online Forums

J. Fink; O. Mubin; F. Kaplan; P. Dillenbourg

3rd International Conference on Social Robotics, ICSR 2011, Amsterdam, The Netherlands, November 24-25, 2011.

Anthropomorphism describes people's tendency to ascribe humanlike qualities to non-human artifacts, such as robots. We investigated anthropomorphic language in 750 posts of online forums about the Roomba robotic vacuum cleaner, the AIBO robotic dog and the iPad tablet computer. Results of this content analysis suggest a significant difference in anthropomorphic language usage among the three technologies. In contrast to Roomba and iPad, the specific characteristics of the robotic dog fostered more social interaction and led people to use considerably more anthropomorphic language.

*

People's Perception of Domestic Service Robots: Same Household, Same Opinion?

J. Fink; V. Bauwens; O. Mubin; F. Kaplan; P. Dillenbourg

2011. 3rd International Conference on Social Robotics, Amsterdam, The Netherlands, November 24-25, 2011. p. 204-213.

DOI : 10.1007/978-3-642-25504-5.

The study presented in this paper examined people's perception of domestic service robots by means of an ethnographic study. We investigated the initial reactions of nine households who lived with a Roomba vacuum-cleaning robot over a two-week period. To explore people's attitudes and how they changed over time, we used a recurring questionnaire that was filled in at three different times, integrated into 18 semi-structured qualitative interviews. Our findings suggest that being part of a specific household has an impact on how each individual household member perceives the robot. We interpret that, even though individual experiences with the robot might differ from one another, a household shares a specific opinion about the robot. Moreover, our findings indicate that how people perceived Roomba did not change drastically over the two-week period.

*

Classroom orchestration: The third circle of usability

P. Dillenbourg; G. Zufferey; H. S. Alavi; P. Jermann; L. H. S. Do et al.

2011. 9th International Conference on Computer Supported Collaborative Learning, Hong Kong, China, July 4-8, 2011. p. 510-517.

We analyze classroom orchestration as a question of usability in which the classroom is the user. Our experiments revealed design features that reduce the global orchestration load. According to our studies in vocational schools, paper-based interfaces have the potential of making educational workflows tangible, i.e. both visible and manipulable. Our studies in university classes converge on minimalism: they reveal the effectiveness of tools that make visible what is invisible but do not analyze, predict or decide for teachers. These studies revealed a third circle of usability. The first circle concerns individual usability (HCI). The second circle is about design for teams (CSCL/CSCW). The third circle raises design choices that impart visibility, reification and minimalism on classroom orchestration. Whether a CSCL environment allows students to look at what the next team is doing (e.g. tabletops versus desktops) illustrates the third-circle issues that are important for orchestration.

*

A 99 Dollar Head-Mounted Eye Tracker

Y. Marko; A. Mazzei; F. Kaplan; P. Dillenbourg

In F. Vitu, E. Castet, & L. Goffart (Eds.), Abstracts of the 16th European Conference on Eye Movements (ECEM), Marseille, France, August 21-25, 2011.

Head-mounted eye-trackers are powerful research tools to study attention processes in various contexts. Most existing commercial solutions are still very expensive, limiting the current use of this technology. We present a hardware design to build, at low cost, a camera-based head-mounted eye tracker using two cameras and one infrared LED. A Playstation Eye camera (PEye) is fixed on an eyeglasses frame and positioned under one eye to track its movements. The filter of the PEye is replaced by another one (Optolite 750nm) that blocks the visible light spectrum. The focal length of the PEye needs to be re-adjusted in order to obtain a sharp image of the eye. This is done by increasing the distance between the charge coupled device (CCD) and the lens by a few millimeters. One IR-LED (Osram SFH485P) is installed near the PEye lens to impose an artificial infrared lighting which produces the so-called "dark pupil effect". This is done while respecting the Minimum Safe Working Distance. We positioned a second camera on the front side of the eyeglasses frame. Preliminary applicative tests indicate an accuracy of approximately one degree of visual angle, which makes this tool relevant for many eye-tracking projects.

*

Producing and Reading Annotations on Paper Documents: a geometrical framework for eye-tracking studies

A. Mazzei; F. Kaplan; P. Dillenbourg

Symposium N°13: Interacting with electronic and mobile media: Oculomotor and cognitive effects. In F. Vitu, E. Castet, & L. Goffart (Eds.), Abstracts of the 16th European Conference on Eye Movements (ECEM), Marseille, France, August 21-25, 2011.

The printed textbook remains the primary medium for studying in educational systems. Learners use personal annotation strategies while reading. These practices play an important role in supporting working memory, enhancing recall and influencing attentional processes. To be able to study these cognitive mechanisms we have designed and built a lightweight head-mounted eye tracker. Contrary to many eye trackers that require the reader's head to stay still, our system permits complete freedom of movement and thus enables the study of reading behaviors as they occur in everyday life. To accomplish this task we developed a geometrical framework to determine the localization of the gaze on a flattened document page. The eye tracker embeds a dual-camera system which synchronously records the reader's eye movements and the paper document. The framework post-processes these two video streams. Firstly, it performs a monocular 3D tracking of the human eyeball to infer a plausible 3D gaze trajectory. Secondly, it applies a feature-point-based method to recognize the document page and estimate its planar pose robustly. Finally, it disambiguates their relative position by optimizing the system parameters. Preliminary tests show that the proposed method is accurate enough to obtain reliable fixations on textual elements.
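
The second step (page recognition and planar pose) is commonly implemented with feature matching and a homography; the sketch below shows one such pipeline with OpenCV, mapping a fixation from the scene-camera frame into flattened-page coordinates. The file names and the gaze point are placeholders, and this is not the framework's actual code.

import cv2
import numpy as np

frame = cv2.imread("scene_camera_frame.png", cv2.IMREAD_GRAYSCALE)
page = cv2.imread("reference_page_scan.png", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the camera frame and the reference scan.
orb = cv2.ORB_create(2000)
kf, df = orb.detectAndCompute(frame, None)
kp, dp = orb.detectAndCompute(page, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(df, dp), key=lambda m: m.distance)[:200]

# Robustly estimate the frame-to-page homography with RANSAC.
src = np.float32([kf[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

gaze_in_frame = np.float32([[[640.0, 360.0]]])     # estimated fixation (px, placeholder)
gaze_on_page = cv2.perspectiveTransform(gaze_in_frame, H)
print(gaze_on_page)  # fixation in flattened-page coordinates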

*

Paper Interface Design for Classroom Orchestration

S. Cuendet; Q. Bonnard; F. Kaplan; P. Dillenbourg

CHI, Vancouver, BC, Canada, May 7-12, 2011.

Designing computer systems for educational purpose is a difficult task. While many of them have been developed in the past, their use in classrooms is still scarce. We make the hypothesis that this is because those systems take into account the needs of individuals and groups, but ignore the requirements inherent in their use in a classroom. In this work, we present a computer system based on a paper and tangible interface that can be used at all three levels of interaction: individual, group, and classroom. We describe the current state of the interface design and why it is appropriate for classroom orchestration, both theoretically and through two examples for teaching geometry.

*

Cognitive and social effects of handwritten annotations

A. Mazzei; F. Kaplan; P. Dillenbourg

Red-conference, rethinking education in the knowledge society, Monte Verità, Switzerland, March 7-10, 2011.

This article first describes a method for extracting and classifying handwritten annotations on printed documents using a simple camera integrated in a lamp. The ambition of such a research is to offer a seamless integration of notes taken on printed paper in our daily interactions with digital documents. Existing studies propose a classification of annotations based on their form and function. We demonstrate a method for automating such a classification and report experimental results showing the classification accuracy. In the second part of the article we provide a road map for conducting user-centered studies using eye-tracking systems aiming to investigate the cognitive roles and social effects of annotations. Based on our understanding of some research questions arising from this experiment, in the last part of the article we describe a social learning environment that facilitates knowledge sharing across a class of students or a group of colleagues through shared annotations.
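
One simple way to realize the extraction step, assuming a registered image of the clean printed page is available, is image differencing; the sketch below is an illustration of that idea only, not the classification method reported in the article, and the file names and thresholds are invented.

import cv2

printed = cv2.imread("clean_page.png", cv2.IMREAD_GRAYSCALE)
annotated = cv2.imread("annotated_page.png", cv2.IMREAD_GRAYSCALE)

# Ink added by the reader = what differs from the registered printed page.
diff = cv2.absdiff(printed, annotated)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

# Connected components approximate individual annotations; their shape and
# position (elongated, compact, marginal...) could feed a form/function classifier.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
print(n - 1, "annotation blobs found")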