Search results for: turk

Number of results: 2214

2011
Jin Bao, Yasuaki Sakamoto

We introduce and test a crowdsourcing technique for creative problem solving. In total, 1853 workers from Amazon’s Mechanical Turk produced 468 ideas for solving the 2010 oil spill problem in the Gulf of Mexico. In our technique, one crowd generated initial ideas, another crowd evaluated the creativity of these ideas, and yet another crowd combined pairs of ideas produced by the previous crowd....
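
As a hedged illustration of the three-crowd pipeline this abstract describes (one crowd generates, one evaluates, one combines), here is a minimal Python sketch; the function names and the treatment of workers as callables are assumptions for illustration, not the paper's implementation:

```python
import itertools
import random

def generate_ideas(crowd, prompt):
    """Crowd 1: each worker (modeled as a callable) submits one free-form idea."""
    return [worker(prompt) for worker in crowd]

def rate_creativity(crowd, ideas):
    """Crowd 2: every idea is rated by all raters; keep the mean score."""
    return {idea: sum(rater(idea) for rater in crowd) / len(crowd)
            for idea in ideas}

def combine_ideas(crowd, ideas, n_pairs):
    """Crowd 3: workers merge random pairs of earlier ideas into new ideas."""
    pairs = random.sample(list(itertools.combinations(ideas, 2)), n_pairs)
    return [worker(a, b) for (a, b), worker in zip(pairs, itertools.cycle(crowd))]
```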

2011
Yaron Singer, Manas Mittal

In this paper we present a mechanism for determining near-optimal prices for tasks in online labor markets, which are often used for crowdsourcing. In particular, the mechanism is designed to handle the intricacies of markets like Mechanical Turk, where workers arrive online and requesters have budget constraints. The mechanism is incentive compatible, budget feasible, and has competitive ratio performance...
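
The properties named in the abstract (posted prices, online worker arrivals, a hard budget) can be illustrated with a simplified two-phase posted-price rule in the same spirit; this is a hedged sketch, not the paper's actual algorithm or its competitive-ratio analysis:

```python
def posted_price_allocation(bids_stream, budget, sample_frac=0.5):
    """Two-phase posted-price sketch for budget-feasible online hiring.

    Phase 1: observe a prefix of arriving workers' costs without hiring,
    and pick a price the remaining budget could pay many times over.
    Phase 2: offer that fixed price to each later worker; hire anyone whose
    cost is at or below it until the budget runs out. Because the posted
    price never depends on a worker's own bid, accepting it truthfully is
    in each phase-2 worker's interest (incentive compatibility).
    """
    bids = list(bids_stream)
    cut = int(len(bids) * sample_frac)
    sample, rest = bids[:cut], bids[cut:]

    # Largest sampled cost p such that the budget covers everyone in the
    # sample who would accept p (a crude density estimate).
    price = 0.0
    for p in sorted(sample):
        acceptors = sum(1 for c in sample if c <= p)
        if p > 0 and budget / p >= acceptors:
            price = p

    hired, spent = [], 0.0
    for i, cost in enumerate(rest):
        if cost <= price and spent + price <= budget:
            hired.append(cut + i)  # index into the original arrival order
            spent += price
    return price, hired, spent
```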

Journal: iJIM, 2015
Daniel Wessel

Direct behavior observation, i.e., observation without first creating a video recording, is a challenging one-shot task: the behavior has to be coded accurately during the situation itself. Mobile devices can assist direct observation, and applications for this purpose are already available. However, the mobile revolution has led to new developments in devices, infrastructure, and market penetration...

2012
Sara Owsley Sood, Judd Antin, Elizabeth F. Churchill

Profanity detection is often thought to be an easy task. However, past work has shown that current, list-based systems perform poorly. They fail to adapt to evolving profane slang and to identify profane terms that have been disguised, only partially censored (e.g., @ss, f$#%), or intentionally or unintentionally misspelled (e.g., biatch, shiiiit). For these reasons, they are easy to circumvent...
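
A minimal sketch of why bare list matching fails on the abstract's examples, and of one common remedy (normalizing character substitutions, then allowing a small edit distance); the tiny word list and thresholds below are illustrative assumptions, not the system the paper proposes:

```python
import re

# Tiny illustrative list; a real system would use a curated lexicon.
PROFANITY = {"ass", "bitch", "shit"}

# Map common symbol/digit substitutions back to plain letters.
LEET = str.maketrans({"@": "a", "$": "s", "!": "i", "0": "o", "1": "i", "3": "e"})

def normalize(token):
    token = token.lower().translate(LEET)
    token = re.sub(r"[^a-z]", "", token)       # drop censoring symbols like # or %
    return re.sub(r"(.)\1{2,}", r"\1", token)  # squeeze repeats: shiiiit -> shit

def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_profane(token, max_dist=1):
    norm = normalize(token)
    return any(edit_distance(norm, w) <= max_dist for w in PROFANITY)
```

With this normalization, "@ss" maps to "ass" (an exact hit) and "biatch" falls within edit distance 1 of "bitch", so both of the abstract's obfuscation examples are caught where a literal list lookup would miss them.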

2013
Ming Yin, Yiling Chen, Yu-An Sun

Online labor markets such as Amazon Mechanical Turk (MTurk) have emerged as platforms that facilitate the allocation of productive effort across global economies. Many of these markets compensate workers with monetary payments. We study the effects of performance-contingent financial rewards on work quality and worker effort in MTurk via two experiments. We find that the magnitude of performance...

2010
Cyrus Rashtchian, Peter Young, Micah Hodosh, Julia Hockenmaier

Crowdsourcing approaches such as Amazon's Mechanical Turk (MTurk) make it possible to annotate or collect large amounts of linguistic data at relatively low cost and high speed. However, MTurk offers only limited control over who is allowed to participate in a particular task. This is particularly problematic for tasks requiring free-form text entry. Unlike multiple-choice tasks, there is no c...
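
One concrete way to exercise the limited participation control MTurk does offer is qualification requirements; the snippet below is a hedged sketch using today's boto3 parameter names (the paper predates this API), restricting a HIT to experienced US-based workers:

```python
QUALIFICATIONS = [
    {   # built-in "Worker_Locale" qualification
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {   # built-in "Worker_PercentAssignmentsApproved" qualification
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
]
# Passed as QualificationRequirements=QUALIFICATIONS to mturk.create_hit(...);
# free-form tasks often layer a custom qualification test on top of these.
```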

2009
Martin I. Tietze, Andi Winterboer, Johanna D. Moore

In this paper we examine the effect of linguistic devices on recall and comprehension in information presentation using both recall and eye-tracking data. In addition, the results were validated via an experiment using Amazon’s Mechanical Turk micro-task environment.

2015
Debarun Kar, Fei Fang, Francesco Maria Delle Fave, Nicole Sintov, Milind Tambe, Arlette van Wissen

While human behavior models based on repeated Stackelberg games have been proposed for domains such as “wildlife crime”, where there is repeated interaction between the defender and the adversary, there has been no empirical study with human subjects to show the effectiveness of such models. This paper presents an initial study based on extensive human-subject experiments with participants on Amazon...
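
The human behavior models in this line of work are typically of the subjective-utility quantal response (SUQR) family; below is a hedged sketch of such a model with illustrative weights, not the paper's fitted parameters:

```python
import math

def suqr_attack_probs(coverage, rewards, penalties, w=(-9.0, 0.8, 0.6)):
    """Attack probability for each target under an SUQR-style adversary.

    coverage[i]:  defender coverage probability at target i
    rewards[i]:   adversary reward if an attack on target i succeeds
    penalties[i]: adversary penalty if caught at target i (a negative number)
    w: illustrative model weights (real models fit these to human data)
    """
    utils = [w[0] * x + w[1] * r + w[2] * p
             for x, r, p in zip(coverage, rewards, penalties)]
    m = max(utils)                          # stabilize the softmax
    exps = [math.exp(u - m) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

# Example: three targets; heavier coverage on target 0 pushes attacks elsewhere.
print(suqr_attack_probs([0.6, 0.2, 0.2], [5, 5, 3], [-4, -4, -2]))
```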

2010
Mukund Jha, Jacob Andreas, Kapil Thadani, Sara Rosenthal, Kathleen McKeown

This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres, while avoiding a large investment of time and money, by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences, and pose the subsequent disambiguation tasks as multiple-choice...
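
A hedged sketch of posing one PP-attachment decision as a multiple-choice item; the heuristic of offering the nearest preceding verbs and nouns as candidate heads is a deliberate simplification, not the paper's extraction system:

```python
def pp_attachment_question(tokens, prep_index, n_choices=4):
    """Build one multiple-choice item for a crowd worker.

    tokens: list of (word, coarse_pos) pairs for the sentence
    prep_index: position of the preposition heading the PP
    """
    candidates = [w for w, pos in tokens[:prep_index] if pos in ("VERB", "NOUN")]
    prep = tokens[prep_index][0]
    return {
        "question": f"Which word does the phrase starting with '{prep}' modify?",
        "choices": candidates[-n_choices:],  # the closest few plausible heads
    }

# Example: in "She ate pizza with friends", does "with" attach to "ate" or "pizza"?
tokens = [("She", "PRON"), ("ate", "VERB"), ("pizza", "NOUN"),
          ("with", "ADP"), ("friends", "NOUN")]
print(pp_attachment_question(tokens, 3))
```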

2010
Chris Callison-Burch, Mark Dredze

In this paper we give an introduction to using Amazon's Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies. We survey the papers published in the NAACL 2010 workshop. Twenty-four researchers participated in the workshop's shared task to create data for speech and language applications with $100.
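
For readers who want to reproduce this kind of data collection today, a minimal HIT-creation sketch using the current boto3 MTurk client follows; the paper itself used the 2010-era API, so the task text, reward, and timing values here are assumptions:

```python
import boto3

# Sandbox endpoint so test HITs cost nothing; drop endpoint_url for production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

QUESTION_XML = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>transcription</QuestionIdentifier>
    <QuestionContent><Text>Transcribe the audio clip.</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Transcribe a short audio clip",
    Description="Listen to a 10-second clip and type what you hear.",
    Keywords="transcription, audio, speech",
    Reward="0.05",                        # USD per assignment
    MaxAssignments=3,                     # redundant labels for quality control
    LifetimeInSeconds=24 * 3600,
    AssignmentDurationInSeconds=600,
    Question=QUESTION_XML,
)
print("HIT ID:", hit["HIT"]["HITId"])
```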

[Chart: number of search results per year]