Search results for: evaluation question

Number of results: 1037547

2015
Yiyun Shou Michael Smithson

Evaluation of causal reasoning models depends on how well the subjects' causal beliefs are assessed. Elicitation of causal beliefs is determined by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants' responses. The results of our experiment (Study 1) demonstrate that both the mean and homogeneity of t...

2007
Vasile Rus Zhiqiang Cai Arthur C. Graesser

Question Generation (QG) is proposed as a shared-task evaluation campaign for evaluating Natural Language Generation (NLG) research. QG is a subclass of NLG that plays an important role in learning environments, information seeking, and other applications. We describe a possible evaluation framework for standardized evaluation of QG that can be used for black-box evaluation, for finer-grained e...

2006
Hoa Trang Dang

The Document Understanding Conference (DUC) 2005 evaluation had a single user-oriented, question-focused summarization task, which was to synthesize from a set of 25-50 documents a well-organized, fluent answer to a complex question. The evaluation shows that the best summarization systems have difficulty extracting relevant sentences in response to complex questions (as opposed to representativ...

2013
Jaspreet Kaur Vishal Gupta

Question Answering (QA) is a focused form of information retrieval. A Question Answering system tries to return accurate answers to questions posed in natural language, given a set of documents. Basically, a question answering (QA) system has three elements, i.e. question classification, information retrieval (IR), and answer extraction. These elements play a major role in Question Answering....

2006
Christelle Ayache Brigitte Grau Anne Vilnat

This paper describes the EQueR-EVALDA Evaluation Campaign, the French evaluation campaign of Question-Answering (QA) systems. The EQueR Evaluation Campaign included two tasks of automatic answer retrieval: the first was a QA task over a heterogeneous collection of texts, mainly newspaper articles, and the second a specialised task in the medical field over a corpus of medical texts. In to...

2004
Luís Costa

This paper starts by describing Esfinge, a general-domain Portuguese question answering system that uses the redundancy available on the Web as an important resource to find its answers. The paper also presents the strategies employed to participate in CLEF-2004 and discusses the results obtained. Three different strategies were tested: searching for the answers only in the CLEF document collection...

2008
Lori Lamel Sophie Rosset Christelle Ayache Djamel Mostefa Jordi Turmo Pere Comas

This paper reports on the QAST track of CLEF, which aims to evaluate Question Answering on Speech Transcriptions. Accessing information in spoken documents poses challenges beyond those of text-based QA, requiring systems to address the characteristics of spoken language, as well as errors in the case of automatic transcriptions of spontaneous speech. The framework and results of the pilot QAst eval...

2009
Anselmo Peñas Pamela Forner Richard F. E. Sutcliffe Álvaro Rodrigo Corina Forascu Iñaki Alegria Danilo Giampiccolo Nicolas Moreau Petya Osenova

This paper describes the first round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, proposed at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. The general goals of this exercise are (i) to study if the cu...

2010
Nicolas Moreau Olivier Hamon Djamel Mostefa Sophie Rosset Olivier Galibert Lori Lamel Jordi Turmo Pere Comas Paolo Rosso Davide Buscaldi Khalid Choukri

Question Answering (QA) technology aims at providing relevant answers to natural language questions. Most Question Answering research has focused on mining document collections containing written texts to answer written questions. In addition to written sources, a large (and growing) amount of potentially interesting information appears in spoken documents, such as broadcast news, speeches, sem...

2008
Rodney D. Nielsen

We propose a core task for question generation intended to maximize research activity and a subtask to identify the key concepts in a document for which questions should be generated. We discuss how these tasks are affected by the target application, discuss human evaluation techniques, and propose application-independent methods to automatically evaluate system performance.

[Chart: number of search results per year]