Search results for: automated essay scoring

Number of results: 186447

2011
Rod D. Roscoe, Laura K. Varner, Zhiqiang Cai, Jennifer L. Weston, Scott A. Crossley, Danielle S. McNamara

Research on automated essay scoring (AES) indicates that computer-generated essay ratings are comparable to human ratings. However, despite investigations into the accuracy and reliability of AES scores, less attention has been paid to the feedback delivered to the students. This paper presents a method developers can use to quickly evaluate the usability of an automated feedback system prior t...

2006
Tao-Hsing Chang, Chia-Hoang Lee, Yu-Ming Chang

Chinese automated essay scoring (CAES) is a very important tool for much educational research. However, none of the methods for retrieving features from English essays is applicable to Chinese writing, because Chinese grammar parsers cannot produce reliable and useful syntactic features. CAES systems must therefore explore other distinctive features of Chinese writing. This paper proposes a method for retri...

Journal: JSW, 2014
Mingqing Zhang, Shudong Hao, Yanyan Xu, Dengfeng Ke, Hengli Peng

Writing is increasingly regarded by language-test designers as an important indicator of test-takers' language skill. As such tests grow in popularity and the number of test-takers rises, scoring so many essays with human raters becomes a huge task. Many methods have been applied to this problem; the traditional one is Latent Semantic Analysis (LSA). ...
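The LSA approach mentioned in this abstract can be sketched in a few lines: a term-by-essay matrix is factored with a truncated SVD, and essays are compared in the resulting low-rank "semantic" space. This is a minimal illustration with a hypothetical toy count matrix, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical toy term-by-essay count matrix (rows: terms, cols: essays).
# In a real AES pipeline this would come from tokenizing a graded corpus.
X = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 1, 1],
    [0, 0, 2, 2],
], dtype=float)

# LSA: truncated SVD projects each essay into a k-dimensional latent space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # number of latent dimensions to keep
essay_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per essay

def cosine(a, b):
    """Cosine similarity between two latent essay vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A new essay could be scored by its latent-space similarity to
# high-scoring reference essays; here we just compare two essays.
sim = cosine(essay_vecs[2], essay_vecs[3])
```

In an LSA-based scorer, `sim` against a set of reference essays of known grade would feed the final score; the toy matrix and dimension `k = 2` are assumptions for illustration.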

1999
Eleni Miltsakaki

In this paper we are concerned with the location of topics in text processing and the determination of the update unit in looking up topic continuations and topic shifts. Using key elements of the Centering Model of local discourse coherence and empirical evidence from Modern Greek and Japanese we argue that the appropriate update unit for topic tracking is the sentence in its traditional sense...

1999
Jill Burstein, Martin Chodorow

The e-rater system is an operational automated essay scoring system developed at Educational Testing Service (ETS). The average agreement between human readers, and between independent human readers and e-rater, is approximately 92%. There is much interest in the larger writing community in examining the system’s performance on nonnative speaker essays. This paper focuses on results of a stud...
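The agreement figure quoted in this abstract is typically computed as exact-or-adjacent agreement: the two scores differ by at most one point on the rating scale. A minimal sketch, using hypothetical scores on a 1–6 scale (not ETS data):

```python
# Hypothetical human and machine scores on a 1-6 scale for five essays.
human = [4, 3, 5, 2, 4]
machine = [4, 4, 5, 2, 3]

# Exact-or-adjacent agreement: scores that differ by at most one point.
agree = sum(abs(h - m) <= 1 for h, m in zip(human, machine))
rate = 100.0 * agree / len(human)  # percentage of essays in agreement
```

On real operational data this rate is what figures like the ~92% above report; exact agreement (difference of zero) is a stricter variant.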

2013
J. Olawande Daramola, Ibukun Afolabi, Ibidapo Akinyemi, Olufunke O. Oladipupo

The procedure for grading students’ essays in subject-based examinations is quite challenging, particularly when dealing with a large number of students. Hence, several automatic essay-grading systems have been designed to alleviate the demands of manual subject grading. However, relatively few of the existing systems are able to give informative feedback based on elaborate domain...

2017
Andrea Horbach, Dirk Scholten-Akoun, Yuning Ding, Torsten Zesch

Automatic essay scoring is now successfully used even in high-stakes tests, but mainly for holistic scoring of learner essays. We present a new dataset of essays written by highly proficient German native speakers that is scored using a fine-grained rubric, with the goal of providing detailed feedback. Our experiments with two state-of-the-art scoring systems (a neural and a SV...

2005
Lawrence M. Rudner, Veronica Garcia

The Graduate Management Admission Council® (GMAC®) has long benefited from advances in automated essay scoring. When GMAC® adopted ETS® e-rater® in 1999, the Council’s flagship product, the Graduate Management Admission Test® (GMAT®), became the first large-scale assessment to incorporate automated essay scoring. The change was controversial at the time (Iowa State Daily, 1999; Calfee, 2000). T...

2006
Tsunenori Ishioka, Masayuki Kameda

We have developed an automated Japanese essay scoring system called Jess. The system needs expert writings rather than expert raters to build its evaluation model. By detecting statistical outliers of predetermined target essay features relative to many professional writings for each prompt, the system can evaluate essays. The following three features are examined: (1) rhetoric — syntactic var...
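The outlier-detection idea described here can be illustrated with a simple z-score check against a reference corpus. The feature values and threshold below are hypothetical, chosen only to show the mechanism, not Jess's actual features or cutoffs.

```python
import statistics

# Hypothetical feature values (e.g. average sentence length in words)
# measured on professional writings for the same prompt.
reference = [18.2, 20.1, 19.5, 21.0, 17.8, 19.9, 20.4]

mu = statistics.mean(reference)
sigma = statistics.stdev(reference)

def is_outlier(value, z_threshold=3.0):
    """Flag an essay feature that deviates strongly from the reference corpus."""
    return abs(value - mu) / sigma > z_threshold

# An essay with a very atypical feature value would be flagged;
# one near the professional norm would not.
flagged = is_outlier(5.0)      # far below the reference mean
typical = is_outlier(19.5)     # near the reference mean
```

A system like Jess would apply such a check per feature and per prompt, penalizing essays whose features fall outside the professional range.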

2005
Hao-Chuan Wang, Chun-Yen Chang, Tsai-Yen Li

This paper describes an automated scorer for assessing students’ Creative Problem-Solving (CPS) abilities via modeling the intra-structure of students’ essays describing their thoughts on solving particular problems. The automated scorer aims to grade students’ open-ended responses to an essay-question-type CPS ability test, instead of using typical Likert-type or multiple-choice questions that...
