Search results for: turk
Number of results: 2214
Expertise of annotators plays a major role in crowdsourcing-based opinion aggregation models. In such frameworks, the accuracy and bias of annotators are occasionally taken as important features, and priorities are assigned to the annotators based on them. But instead of relying on a single feature, multiple features can be considered and separate rankings can be produced to judge the annotators pro...
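As a rough illustration of the idea in this abstract, the sketch below combines several hypothetical per-annotator rankings (for example, by accuracy and by bias) into priority weights and uses those weights to aggregate binary opinions. The function names and data are invented for demonstration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's model): combine several per-annotator
# feature rankings into a single priority score, then aggregate labels with
# those priorities as weights. All names and data here are hypothetical.

def combine_rankings(rankings):
    """rankings: list of dicts mapping annotator -> rank (1 = best)."""
    annotators = rankings[0].keys()
    # Borda-style combination: lower average rank -> higher priority weight.
    return {a: 1.0 / (1 + sum(r[a] for r in rankings) / len(rankings))
            for a in annotators}

def aggregate_labels(labels, priority):
    """labels: dict annotator -> 0/1 opinion; priority: annotator -> weight."""
    score = sum(priority[a] * labels[a] for a in labels)
    total = sum(priority[a] for a in labels)
    return 1 if score / total >= 0.5 else 0

accuracy_rank = {"ann1": 1, "ann2": 3, "ann3": 2}   # hypothetical rankings
bias_rank     = {"ann1": 2, "ann2": 1, "ann3": 3}
priority = combine_rankings([accuracy_rank, bias_rank])
print(aggregate_labels({"ann1": 1, "ann2": 0, "ann3": 1}, priority))  # -> 1
```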
The past few years have seen an increasing interest in using Amazon’s Mechanical Turk for purposes of collecting data and performing annotation tasks. One such task is the mass evaluation of system output in a variety of tasks. In this paper, we present MAISE, a package that allows researchers to evaluate the output of their AI system(s) using human judgments collected via Amazon’s Mechanical T...
Although micro-task crowdwork was popularized by the Mechanical Turk (MTurk) labor-market platform, it is useful in many other contexts and communities as well. Unfortunately, several MTurk design choices, such as worker anonymity and isolation, are problematic in other environments. This paper introduces Frenzy, a platform for friendsourcing. Frenzy helps people who are closely connected in a ...
This paper describes a framework for evaluation of spoken dialogue systems. Typically, evaluation of dialogue systems is performed in a controlled test environment with carefully selected and instructed users. However, this approach is very demanding. An alternative is to recruit a large group of users who evaluate the dialogue systems in a remote setting under virtually no supervision. Crowdso...
Paid crowdsourcing platforms, which harness the extrinsic motivation of monetary compensation, are seeing increased usage for design feedback tasks, but the low quality of design feedback from crowdsourced workers continues to be a problem for designers. Intrinsic motivation has the potential to increase the quality of worker responses, but is difficult to elicit in paid workers. In this paper,...
The Mechanical Turk crowdsourcing platform currently fails to provide the most basic piece of information to enable workers to make informed decisions about which tasks to undertake: what is the expected hourly pay? Mechanical Turk advertises a reward amount per assignment, but does not give any indication of how long each assignment will take. We have developed a browser plugin that tracks the...
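The arithmetic behind such an estimate is straightforward. The following minimal sketch (with hypothetical names, not the plugin's actual code) converts a per-assignment reward and an observed completion time into an effective hourly rate.

```python
# Minimal sketch of the calculation such a plugin needs (hypothetical names):
# effective hourly pay = reward per assignment / observed time per assignment.

def effective_hourly_pay(reward_usd, seconds_spent):
    """Convert a per-assignment reward and completion time to an hourly rate."""
    return reward_usd / seconds_spent * 3600

# e.g. a $0.25 assignment that takes 4 minutes works out to $3.75/hour
print(f"${effective_hourly_pay(0.25, 240):.2f}/hour")
```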
Collecting human judgments for music similarity evaluation has always been a difficult and time-consuming task. This paper explores the viability of Amazon Mechanical Turk (MTurk) for collecting human judgments for audio music similarity evaluation tasks. We compared the similarity judgments collected from Evalutron6000 (E6K) and MTurk using the Music Information Retrieval Evaluation eXchange 2...
Computerized generation of humor is a notoriously difficult AI problem. We develop an algorithm called Libitum that helps humans generate humor in a Mad Libs®, a popular fill-in-the-blank game. The algorithm is based on a machine-learned classifier that determines whether a potential fill-in word is funny in the context of the Mad Lib story. We use Amazon Mechanical Turk to create gr...
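For illustration only, a generic classifier in this spirit might score candidate fill-in words from bag-of-words features of the surrounding context. The sketch below uses scikit-learn with invented examples and labels; it is not the Libitum algorithm itself.

```python
# Illustrative sketch: "is this fill-in funny in this context?" as a simple
# text classification problem. The examples and labels are invented here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example concatenates the blank's surrounding context with a candidate word.
examples = [
    "the ___ ate my homework [dog]",
    "the ___ ate my homework [platypus]",
    "we drove to the ___ [store]",
    "we drove to the ___ [volcano]",
]
funny = [0, 1, 0, 1]  # hypothetical crowd labels (1 = judged funny)

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(examples, funny)
print(model.predict(["the ___ ate my homework [walrus]"]))
```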
The mental lexicon plays a central role in human language competence and inspires the creation of new lexical resources. The traditional linguistic experiment method used to explore the mental lexicon has some disadvantages. Crowdsourcing has become a promising method for conducting linguistic experiments, enabling us to explore the mental lexicon in an efficient and economical way. We focus on the fe...
A multiword is compositional if its meaning can be expressed in terms of the meaning of its constituents. In this paper, we collect and analyse the compositionality judgments for a range of compound nouns using Mechanical Turk. Unlike existing compositionality datasets, our dataset has judgments on the contribution of constituent words as well as judgments for the phrase as a whole. We use this...
[Chart: number of search results per year]