Reliability-Based Feature Weighting for Automated Essay Scoring
Authors
Abstract
Similar Papers
Automated Essay Scoring for Swedish
We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student’s own teacher and also in a blind re-grad...
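As an illustration of the corpus-comparison idea described in this abstract, the following minimal Python sketch builds a word-type vocabulary from reference texts and computes an out-of-vocabulary rate for an essay. The function names and the specific indicator are illustrative assumptions, not the actual features of the Swedish system.

import re
from typing import Iterable, Set

def build_vocabulary(reference_texts: Iterable[str]) -> Set[str]:
    # Collect the word types seen in a reference corpus
    # (e.g., blog posts or newspaper articles).
    vocab: Set[str] = set()
    for text in reference_texts:
        vocab.update(re.findall(r"\w+", text.lower()))
    return vocab

def oov_rate(essay: str, vocab: Set[str]) -> float:
    # Fraction of essay tokens absent from the reference vocabulary;
    # a high rate may signal misspellings or unusual word choices.
    tokens = re.findall(r"\w+", essay.lower())
    return sum(t not in vocab for t in tokens) / len(tokens) if tokens else 0.0

# Hypothetical usage with a one-sentence "corpus":
vocab = build_vocabulary(["Regeringen presenterade en ny budget i dag."])
print(oov_rate("Regeringen presenterade en budjet", vocab))  # 0.25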
Beyond Automated Essay Scoring
The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with ...
Automated Essay Scoring for Nonnative English Speakers
The e-rater system, an automated essay scoring system developed at Educational Testing Service (ETS), is currently being used to score essay responses on the Graduate Management Admission Test (GMAT). The average agreement between human readers, and between independent human readers and e-rater, is approximately 92%. There is much interest in the larger writing community in examining the sy...
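The 92% figure refers to rater agreement, which this abstract does not define precisely; in essay-scoring studies it is commonly reported as exact agreement and agreement within one score point. A minimal sketch of both measures, using hypothetical scores on a 6-point scale:

from typing import Sequence

def agreement_rates(scores_a: Sequence[int], scores_b: Sequence[int]) -> tuple:
    # Returns (exact agreement, agreement within one score point).
    n = len(scores_a)
    exact = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(scores_a, scores_b)) / n
    return exact, adjacent

# Hypothetical scores: human reader vs. automated scorer.
human   = [4, 3, 5, 2, 4, 6, 3, 5]
machine = [4, 4, 5, 2, 3, 6, 1, 5]
exact, adjacent = agreement_rates(human, machine)
print(f"exact: {exact:.0%}, within one point: {adjacent:.0%}")
# exact: 62%, within one point: 88%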
A Multilingual Application for Automated Essay Scoring
In this paper, we present a text evaluation system that helps students improve their Basque or Spanish writing skills. The system uses Natural Language Processing techniques to evaluate essays by computing specific measures. The application uses a client-server architecture, and both the interface and the application itself are multilingual. The article also explains how the system can be adapted to evalu...
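As a rough sketch of the client-server arrangement this abstract mentions, a server could accept an essay plus a language code and return computed measures. The endpoint, the toy measures, and the language codes here are assumptions for illustration, not the application's real interface or NLP components.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def evaluate(essay: str, lang: str) -> dict:
    # Stand-in for the real NLP measures, which the abstract does not detail.
    words = essay.split()
    return {"language": lang,
            "word_count": len(words),
            "avg_word_length": sum(map(len, words)) / max(len(words), 1)}

class EssayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = evaluate(body["essay"], body.get("lang", "es"))
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), EssayHandler).serve_forever()

A client would POST JSON such as {"essay": "...", "lang": "eu"} and render the returned measures in the user's own interface language.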
Automated Essay Scoring for Nonnative English Speakers
The e-rater system is an operational automated essay scoring system developed at Educational Testing Service (ETS). The average agreement between human readers, and between independent human readers and e-rater, is approximately 92%. There is much interest in the larger writing community in examining the system’s performance on nonnative speaker essays. This paper focuses on results of a stud...
Journal
Journal title: Applied Psychological Measurement
Year: 2014
ISSN: 0146-6216, 1552-3497
DOI: 10.1177/0146621614561630