Search results for: turk

Number of results: 2214

Journal: Remote Sensing, 2015
Ling Yu, Sheryl B. Ball, Christine E. Blinn, Klaus Moeltner, Seth Peery, Valerie A. Thomas, Randolph H. Wynne

We recruit an online labor force through Amazon.com’s Mechanical Turk platform to identify clouds and cloud shadows in Landsat satellite images. We find that a large group of workers can be mobilized quickly and relatively inexpensively. Our results indicate that workers’ accuracy is insensitive to wage, but deteriorates with the complexity of images and with time-on-task. In most instances, hu...

2015
Neil Stewart, Christoph Ungemach, Adam J. L. Harris, Daniel M. Bartels, Ben R. Newell, Gabriele Paolacci, Jesse Chandler, Jon Baron, Leif Nelson, Stian Reimers

Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundr...
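The capture-recapture approach mentioned in this abstract can be illustrated with the classic Lincoln-Petersen estimator. The numbers below are hypothetical, chosen only to show the scale of the reported estimate; they are not the study's actual samples or its exact method:

```python
# Lincoln-Petersen capture-recapture estimator (illustrative sketch).
# Estimate a hidden population size from two overlapping samples of
# worker IDs, e.g. workers seen across two batches of MTurk tasks.

def lincoln_petersen(n1, n2, overlap):
    """Estimate total population size from two samples.
    n1: distinct workers seen in the first batch
    n2: distinct workers seen in the second batch
    overlap: workers seen in both batches ("recaptured")
    """
    if overlap == 0:
        raise ValueError("no recaptures: estimate is unbounded")
    return n1 * n2 / overlap

# Hypothetical example: 1,000 distinct workers per batch, 137 seen
# in both, gives an estimated pool on the order of 7,300 workers --
# the scale reported in the abstract above.
print(round(lincoln_petersen(1000, 1000, 137)))
```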

2015
Nitin Madnani, Martin Chodorow, Aoife Cahill, Melissa Lopez, Yoko Futagi, Yigal Attali

Providing writing feedback to English language learners (ELLs) helps them learn to write better, but it is not clear what type or how much information should be provided. There have been few experiments directly comparing the effects of different types of automatically generated feedback on ELL writing. Such studies are difficult to conduct because they require participation and commitment from...

2014
Stuart Schechter, Cristian Bravo-Lillo

We introduce a survey instrument for anticipating otherwise-unforeseen risks resulting from research experiments. We present experiments hypothetically, then ask: “If someone you cared about were a candidate participant for this experiment, would you want that person to be included as a participant?” (Q1) and “Do you believe the researchers should be allowed to proceed with this experiment?” (Q...

2015
Tim Straub, Florian Hawlitschek, Christof Weinhardt

During the last decade, Amazon Mechanical Turk has evolved into an established platform for conducting behavioral research. However, designing and conducting economic experiments on online labor markets remains a complex and ambitious task. In comparison to laboratory environments, a set of specific challenges (such as synchronization and control) has to be thoroughly addressed. In order to suppor...

2014
James Cheng, Monisha Manoharan, Matthew Lease, Yan Zhang

We investigate the feasibility of crowd-based medical diagnosis by posting medical cases on a variety of crowdsourcing platforms: general and specialized volunteer question-answering sites, and the pay-based Mechanical Turk (MTurk) and oDesk. To assess the crowd’s ability to diagnose cases of varying difficulty, three sets of medical cases are considered. While volunteer channels proved ineffective...

2012
Cheng-wei Chan, Jane Yung-jen Hsu

This paper introduces SIFU, a system that recruits native speakers in real time as online volunteer tutors to help answer questions from Chinese language learners reading news articles. SIFU integrates the strengths of two effective online language learning methods: reading online news and communicating with online native speakers. SIFU recruits volunteers from an online social network rathe...

Journal: CoRR, 2016
Sephora Madjiheurem, Valentina Sintsova, Pearl Pu

Online labor platforms, such as the Amazon Mechanical Turk, provide an effective framework for eliciting responses to judgment tasks. Previous work has shown that workers respond best to financial incentives, especially to extra bonuses. However, most of the tested incentives involve describing the bonus conditions in formulas instead of plain English. We believe that different incentives given...

2012
Chien-Ju Ho, Yu Zhang, Jennifer Wortman Vaughan, Mihaela van der Schaar

Crowdsourcing markets, such as Amazon Mechanical Turk, provide a platform for matching prospective workers around the world with tasks. However, they are often plagued by workers who attempt to exert as little effort as possible, and requesters who deny workers payment for their labor. For crowdsourcing markets to succeed, it is essential to discourage such behavior. With this in mind, we propo...

2010
Michael Heilman, Noah A. Smith

We use Amazon Mechanical Turk to rate computer-generated reading comprehension questions about Wikipedia articles. Such application-specific ratings can be used to train statistical rankers to improve systems’ final output, or to evaluate technologies that generate natural language. We discuss the question rating scheme we developed, assess the quality of the ratings that we gathered through Am...

Chart: number of search results per year
