Search results for: automated essay scoring
Number of results: 186,447
Grading written academic essays by hand requires significant effort: it is a time-consuming task and vulnerable to human biases. Ever since the introduction of modern computing, this task has been one of many automations being explored. Research on automated essay scoring is ongoing, and the majority of studies in recent years are based on extracting multiple linguistic features and using them to build clas...
This paper provides an overview of e-rater®, a state-of-the-art automated essay scoring system developed at the Educational Testing Service (ETS). E-rater is used as part of the operational scoring of two high-stakes graduate admissions programs: the GRE® General Test and the TOEFL iBT® assessments. E-rater is also used to provide score reporting and diagnostic feedback in Criterion℠, ETS's ...
This paper presents an exploration into automated content scoring of non-native spontaneous speech using ontology-based information to enhance a vector space approach. We use content vector analysis as a baseline and evaluate the correlations between human rater proficiency scores and two cosine-similarity-based features, previously used in the context of automated essay scoring. We use two ont...
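The content vector analysis baseline mentioned in this abstract can be illustrated with a minimal sketch: represent a response and a reference text as bag-of-words term-frequency vectors and compute their cosine similarity. This is a generic illustration, not the authors' implementation; the tokenization (lowercased whitespace splitting) is an assumption for brevity.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine between bag-of-words term-frequency vectors.

    A minimal content-vector-analysis baseline; real systems would use
    proper tokenization, stopword removal, and tf-idf weighting.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)          # shared-term contribution
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Identical texts score 1.0; disjoint vocabularies score 0.0.
print(cosine_similarity("the essay addresses the prompt",
                        "the essay addresses the prompt"))  # → 1.0
```

A score derived this way can then be used as one feature (here, one of the cosine-similarity-based features the abstract refers to) in a downstream scoring model.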
The availability of learner corpora, especially those which have been manually error-tagged or shallow-parsed, is still limited. This means that researchers do not have a common development and test set for natural language processing of learner English such as for grammatical error detection. Given this background, we created a novel learner corpus that was manually error-tagged and shallowpar...
The project aims to build an automated essay scoring system using a data set of ≈13,000 essays from kaggle.com. These essays were divided into 8 different sets based on context. We extracted features such as total word count per essay, sentence count, number of long words, part-of-speech counts, etc., from the training set essays. We used a linear regression model to learn from these features and ge...
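The surface features this snippet lists can be sketched in a few lines. This is a hypothetical illustration of the general approach, not the project's actual code; the regex tokenizer and the >6-letter threshold for "long words" are assumptions.

```python
import re

def extract_features(essay: str) -> list[float]:
    """Surface features of the kind the abstract mentions: word count,
    sentence count, and long-word count (threshold chosen arbitrarily).
    POS counts would require a tagger and are omitted here."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    long_words = [w for w in words if len(w) > 6]
    return [float(len(words)), float(len(sentences)), float(len(long_words))]

print(extract_features(
    "Automated scoring saves time. It also reduces grader fatigue."
))  # → [9.0, 2.0, 4.0]
```

A feature matrix built this way over the training essays would then be fed to an ordinary linear regression to predict the human-assigned scores.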
The automated scoring of second-language (L2) learner text along various writing dimensions is an increasingly active research area. In this paper, we focus on determining the topical relevance of an essay to the prompt that elicited it. Given the burden involved in manually assigning scores for use in training supervised prompt-relevance models, we develop unsupervised models and show that the...
Because there is no commonly accepted view of what makes for good writing, automated essay scoring (AES) ideally should be able to accommodate different theoretical positions, certainly at the level of state standards but also perhaps among teachers at the classroom level. This paper presents a practical approach and an interactive computer program for judgment-based customization. This approac...
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text features and individual differences increases the a...