
Publication

Overview of the CLEF 2005 Multilingual Question Answering Track

Alessandro Vallin; Bernardo Magnini; Danilo Giampiccolo; Lili Aunimo; Christelle Ayache; Petya Osenova; Anselmo Peñas; Maarten de Rijke; Bogdan Sacaleanu; Diana Santos; Richard Sutcliffe
In: Carol Peters (Ed.). Accessing Multilingual Information Repositories. Springer-Verlag, Berlin Heidelberg, 2006.

Abstract

The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were used to define eight monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance compared with the previous year. The best performing monolingual system, irrespective of target language, answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language tasks, by contrast, entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system's stated confidence in it, showing that the best systems did not always provide the most reliable confidence scores. We provide an overview of the 2005 QA track, detail the procedure followed to build the test sets, and present a general analysis of the results.
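The abstract does not spell out which confidence-based measure the organisers used, but QA evaluations of this period commonly reported, alongside plain accuracy, a confidence-weighted score (CWS) that rewards systems whose stated confidence tracks correctness. The sketch below is a minimal illustration of those two quantities, assuming per-question (confidence, judged-correct) pairs; the function names and the example run are hypothetical and not taken from the paper.

```python
from typing import List, Tuple


def accuracy(judgements: List[bool]) -> float:
    """Fraction of questions judged correct."""
    return sum(judgements) / len(judgements)


def confidence_weighted_score(results: List[Tuple[float, bool]]) -> float:
    """Confidence-weighted score: rank answers by stated confidence
    (highest first) and average the running precision at each rank.
    A system that puts its correct answers at high confidence scores
    higher than one with the same accuracy but unreliable confidence."""
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    correct_so_far = 0
    running_precisions = []
    for rank, (_confidence, is_correct) in enumerate(ranked, start=1):
        if is_correct:
            correct_so_far += 1
        running_precisions.append(correct_so_far / rank)
    return sum(running_precisions) / len(running_precisions)


# Hypothetical run: (confidence, judged-correct) pairs for five questions.
run = [(0.9, True), (0.8, False), (0.7, True), (0.4, True), (0.1, False)]
print(f"accuracy = {accuracy([correct for _, correct in run]):.3f}")
print(f"CWS      = {confidence_weighted_score(run):.3f}")
```

Note that the actual CLEF 2005 evaluation may have used a different correlation-style measure; the sketch only illustrates why accuracy alone does not capture how reliable a system's confidence scores are.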