Search results for: evaluation question

Number of results: 1,037,547

2008
Álvaro Rodrigo, Anselmo Peñas, M. Felisa Verdejo

We follow the opinion that Question Answering (QA) performance can be improved by combining different systems. Thus, we planned an evaluation oriented to promoting specialization and further collaboration between QA systems. This multi-stream QA requires developing modules able to select the proper stream according to the question and the candidate answers provided. We describe here the ev...
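The stream-selection module described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the routing heuristic (question-word matching with a confidence-based fallback) and the stream names are assumptions for the example.

```python
# Hedged sketch of a multi-stream selection module: given a question and the
# candidate answer each specialized QA stream returns, pick one stream.
# The heuristic below is illustrative only, not the selection method
# evaluated in the paper.

def select_stream(question: str, candidates: dict[str, tuple[str, float]]) -> str:
    """Return the name of the stream whose candidate answer to keep.

    candidates maps stream name -> (answer, confidence in [0, 1]).
    """
    q = question.lower()
    # Assumed specialization: one stream handles factoid wh-questions,
    # another handles definitional questions.
    preferred = "factoid" if q.startswith(("who", "when", "where")) else "definition"
    if preferred in candidates:
        return preferred
    # Fall back to the most confident stream.
    return max(candidates, key=lambda name: candidates[name][1])

best = select_stream(
    "Who wrote Don Quixote?",
    {"factoid": ("Miguel de Cervantes", 0.9), "definition": ("a novel", 0.4)},
)
print(best)  # factoid
```

A real selector would of course learn this routing from the question and the candidate answers jointly; the point here is only the interface: streams in, one chosen stream out.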

2013
Pinaki Bhaskar, Somnath Banerjee, Partha Pakray, Samadrita Banerjee, Sivaji Bandyopadhyay, Alexander F. Gelbukh

The article presents the experiments carried out as part of the participation in the main task (English dataset) of QA4MRE@CLEF 2013. In the developed system, we first combine the question Q and each candidate answer option A to form a (Q, A) pair. Each pair has been considered a Hypothesis (H). We have used Morphological Expansion to rebuild the H. Then, each H has been verified by assigning a ...
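The (Q, A)-to-hypothesis step above can be sketched in a few lines. The join template used here is an assumption; QA4MRE systems typically rewrite the question with each candidate answer substituted in, and the paper's Morphological Expansion step is omitted.

```python
# Minimal sketch of forming one Hypothesis (H) per candidate answer,
# as described in the abstract. The naive "question + answer" template
# is an assumption for illustration only.

def build_hypotheses(question: str, answer_options: list[str]) -> list[str]:
    """Combine a question Q with each candidate answer A into a hypothesis H."""
    hypotheses = []
    for answer in answer_options:
        # Drop the question mark and append the answer to approximate
        # a declarative hypothesis string.
        hypotheses.append(f"{question.rstrip('?')} {answer}")
    return hypotheses

pairs = build_hypotheses(
    "What causes tides?",
    ["the Moon's gravity", "ocean currents"],
)
print(pairs)
```

Each resulting H would then go through expansion and verification scoring, as the abstract outlines.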

2009
Sarra El Ayari, Brigitte Grau

Evaluating complex systems is a complex task. Evaluation campaigns are organized each year to test different systems on global results, but they do not evaluate the relevance of the criteria used. Our purpose consists in modifying the intermediate results created by the components and inserting the new results into the process, without modifying the components. We will describe our framework of ...

2008
Richard Shaw, Ben Solway, Robert J. Gaizauskas, Mark A. Greenwood

Having gold standards allows us to evaluate new methods and approaches against a common benchmark. In this paper we describe a set of gold standard question reformulations and associated reformulation guidelines that we have created to support research into automatic interpretation of questions in TREC question series, where questions may refer anaphorically to the target of the series or to an...

2010
Anselmo Peñas, Pamela Forner, Álvaro Rodrigo, Richard F. E. Sutcliffe, Corina Forascu, Cristina Mota

This paper describes the second round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, a lab of CLEF 2010. Two tasks have been proposed this year: Paragraph Selection (PS) and Answer Selection (AS). The PS task consisted of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. In the AS...

Journal: Language Resources and Evaluation, 2012
Anselmo Peñas, Bernardo Magnini, Pamela Forner, Richard F. E. Sutcliffe, Álvaro Rodrigo, Danilo Giampiccolo

The paper offers an overview of the key issues raised during the 8 years’ activity of the Multilingual Question Answering Track at the Cross Language Evaluation Forum (CLEF). The general aim of the track has been to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages, also drawing attention to a number of chal...

2010
Sarra El Ayari, Brigitte Grau, Anne-Laure Ligozat

Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate the results obtained by their systems more precisely if they want to perform a diagnostic evaluation. There are no too...

2003
Tiphaine Dalmas, Jochen L. Leidner, Bonnie Webber, Claire Grover, Johan Bos

Recently, reading comprehension tests for students and adult language learners have received increased attention within the NLP community as a means to develop and evaluate robust question answering (NLQA) methods. We present our ongoing work on automatically creating richly annotated corpus resources for NLQA and on comparing automatic methods for answering questions against this data set. Sta...
