Search results for: turk
Number of results: 2214
We introduce and test a crowdsourcing technique for creative problem solving. In total, 1853 workers from Amazon’s Mechanical Turk produced 468 ideas for solving the 2010 oil spill problem in the Gulf of Mexico. In our technique, one crowd generated initial ideas, another crowd evaluated the creativity of these ideas, and yet another crowd combined pairs of ideas produced by the previous crowd....
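The three-stage pipeline described in this abstract (generate, evaluate, combine) maps naturally onto chained batches of tasks. A minimal sketch of that structure, where `post_hits` and `collect_results` are hypothetical stubs standing in for whatever crowdsourcing client the authors used:

```python
# Hypothetical three-stage crowd pipeline: generate -> rate -> combine.
# post_hits / collect_results are placeholder stubs, not a real MTurk API.
import itertools

def post_hits(task_type, payloads):
    """Stub: publish one HIT per payload and return HIT ids."""
    return [f"{task_type}-{i}" for i, _ in enumerate(payloads)]

def collect_results(hit_ids):
    """Stub: block until workers respond, then return their answers."""
    return [{"hit": h, "answer": None} for h in hit_ids]

# Stage 1: one crowd generates initial ideas.
idea_hits = post_hits("generate", [{"prompt": "How to stop the oil spill?"}] * 100)
ideas = collect_results(idea_hits)

# Stage 2: a second crowd rates the creativity of each idea.
rating_hits = post_hits("rate", [{"idea": i} for i in ideas])
ratings = collect_results(rating_hits)

# Stage 3: a third crowd combines pairs of highly rated ideas.
top = ideas[:20]  # placeholder selection; real code would sort by rating
pair_hits = post_hits("combine", [{"pair": p} for p in itertools.combinations(top, 2)])
combined = collect_results(pair_hits)
```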
In this paper we present a mechanism for determining near-optimal prices for tasks in online labor markets, often used for crowdsourcing. In particular, the mechanism is designed to handle the intricacies of markets like Mechanical Turk where workers arrive online and requesters have budget constraints. The mechanism is incentive compatible, budget feasible, and has competitive ratio performan...
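To make the setting concrete: a posted-price mechanism offers each arriving worker a take-it-or-leave-it price and must never exceed the requester's budget. A minimal sketch of that interaction loop (the halving price-update rule below is purely illustrative, not the paper's mechanism):

```python
# Sketch of online posted pricing under a hard budget constraint.
# The price-halving schedule is a placeholder; the paper's actual
# price-setting rule is more sophisticated.

def run_posted_prices(workers, budget, start_price):
    """Offer each arriving worker a take-it-or-leave-it price.

    `workers` is a sequence of private costs, unknown to the mechanism.
    A worker accepts iff the offered price covers their cost, which is
    what makes posted pricing incentive compatible.
    """
    price = start_price
    spent, completed = 0.0, 0
    for cost in workers:
        if spent + price > budget:
            price /= 2  # placeholder adjustment to stay budget feasible
        if price >= cost and spent + price <= budget:
            spent += price
            completed += 1
    return completed, spent

tasks_done, total_paid = run_posted_prices(
    workers=[0.4, 1.2, 0.7, 0.3, 2.0], budget=2.5, start_price=1.0)
print(tasks_done, total_paid)
```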
Direct behavior observation, i.e., without first creating a video recording, is a challenging, one-shot task. The behavior has to be coded accurately during the situation itself. Mobile devices can assist direct observation, and applications for this purpose are already available. However, the mobile revolution has led to new developments in devices, infrastructure, and market penetrati...
Profanity detection is often thought to be an easy task. However, past work has shown that current, list-based systems perform poorly. They fail to adapt to evolving profane slang and to identify profane terms that have been disguised or only partially censored (e.g., @ss, f$#%) or intentionally or unintentionally misspelled (e.g., biatch, shiiiit). For these reasons, they are easy to circumve...
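As a concrete illustration of the failure mode above: an exact list lookup misses symbol-substituted and letter-repeated variants that even a crude normalization pass catches. A minimal sketch (the word list and substitution map are placeholders, not any system from the paper):

```python
# Why exact list matching fails on obfuscated profanity, and a crude fix.
# BLOCKLIST and LEET are tiny illustrative examples only.
import re

BLOCKLIST = {"ass"}
LEET = str.maketrans({"@": "a", "$": "s", "3": "e", "1": "i", "0": "o"})

def naive_match(token):
    """Exact lookup, as in a list-based filter."""
    return token.lower() in BLOCKLIST

def normalized_match(token):
    """Undo symbol substitutions and collapse repeated letters first."""
    t = token.lower().translate(LEET)   # "@ss" -> "ass"
    t = re.sub(r"(.)\1{2,}", r"\1", t)  # "shiiiit" -> "shit"
    return t in BLOCKLIST

print(naive_match("@ss"), normalized_match("@ss"))  # False True
```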
Online labor markets such as Amazon Mechanical Turk (MTurk) have emerged as platforms that facilitate the allocation of productive effort across global economies. Many of these markets compensate workers with monetary payments. We study the effects of performance-contingent financial rewards on work quality and worker effort in MTurk via two experiments. We find that the magnitude of performanc...
Crowd-sourcing approaches such as Amazon’s Mechanical Turk (MTurk) make it possible to annotate or collect large amounts of linguistic data at a relatively low cost and high speed. However, MTurk offers only limited control over who is allowed to participate in a particular task. This is particularly problematic for tasks requiring free-form text entry. Unlike multiple-choice tasks, there is no c...
In this paper we examine the effect of linguistic devices on recall and comprehension in information presentation using both recall and eye-tracking data. In addition, the results were validated via an experiment using Amazon’s Mechanical Turk micro-task environment.
While human behavior models based on repeated Stackelberg games have been proposed for domains such as “wildlife crime” where there is repeated interaction between the defender and the adversary, there has been no empirical study with human subjects to show the effectiveness of such models. This paper presents an initial study based on extensive human subject experiments with participants on Am...
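For context, the behavior models in this line of work are typically quantal response variants, in which the adversary attacks a target with probability increasing in its expected utility rather than always best-responding. A standard formulation (an assumption about the model family, not necessarily the exact model this paper tests):

```latex
% Quantal response: probability that the adversary attacks target i
q_i = \frac{e^{\lambda \, U_a(i)}}{\sum_j e^{\lambda \, U_a(j)}}
```

Here U_a(i) is the adversary's expected utility for attacking target i, and the parameter λ ≥ 0 controls rationality: λ = 0 yields uniformly random attacks, while λ → ∞ recovers a perfectly rational best response.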
This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choic...
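The multiple-choice framing described here reduces each extracted prepositional phrase to a question whose answer options are the candidate attachment sites. A minimal sketch of one such item, with invented example data:

```python
# One crowdsourced PP-attachment item: the worker picks which candidate
# head the prepositional phrase attaches to. Example data is invented.
from dataclasses import dataclass

@dataclass
class AttachmentItem:
    sentence: str
    pp: str                 # the prepositional phrase to disambiguate
    candidates: list[str]   # potential attachment heads, shown as choices

item = AttachmentItem(
    sentence="She saw the man with the telescope.",
    pp="with the telescope",
    candidates=["saw", "man"],  # verb vs. noun attachment
)

# Rendered as a multiple-choice task for a worker:
print(item.sentence)
print(f"What does '{item.pp}' modify?")
for i, c in enumerate(item.candidates, 1):
    print(f"  ({i}) {c}")
```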
In this paper we give an introduction to using Amazon’s Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies. We survey the papers published in the NAACL 2010 workshop. Twenty-four researchers participated in the workshop’s shared task to create data for speech and language applications with $100.