{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:43:45.768161Z" }, "title": "EasyTurk: A User-Friendly Interface for High-Quality Linguistic Annotation with Amazon Mechanical Turk", "authors": [ { "first": "Lorenzo", "middle": [], "last": "Bocchi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Valentino", "middle": [], "last": "Frasnelli", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alessio", "middle": [], "last": "Palmero", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Amazon Mechanical Turk (AMT) has recently become one of the most popular crowdsourcing platforms, allowing researchers from all over the world to create linguistic datasets quickly and at a relatively low cost. Amazon provides both a web interface and an API for AMT, but they are not very user-friendly and miss some features that can be useful for NLP researchers. In this paper, we present Easy-Turk, a free tool that improves the potential of Amazon Mechanical Turk by adding to it some new features. The tool is free and released under an open source license. A video showing EasyTurk and its features is available on YouTube.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Amazon Mechanical Turk (AMT) has recently become one of the most popular crowdsourcing platforms, allowing researchers from all over the world to create linguistic datasets quickly and at a relatively low cost. Amazon provides both a web interface and an API for AMT, but they are not very user-friendly and miss some features that can be useful for NLP researchers. In this paper, we present Easy-Turk, a free tool that improves the potential of Amazon Mechanical Turk by adding to it some new features. The tool is free and released under an open source license. 
A video showing EasyTurk and its features is available on YouTube (https://youtu.be/OmKJOrNpGSs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, deep learning algorithms have achieved state-of-the-art results in most NLP tasks, such as textual inference, machine translation, and hate speech detection (Socher et al., 2012). Despite their accuracy, deep learning algorithms have a major downside: they require large amounts of training data, which makes the data bottleneck even more problematic than with other machine learning algorithms such as SVMs (Gheisari et al., 2017). The need to leverage large amounts of manually annotated data has become a major challenge for the NLP community, since linguistic annotation performed by domain experts is both expensive and time-consuming. This explains why crowdsourcing platforms, which offer access to a large pool of potential annotators, have been successfully used for the creation of annotated datasets.", "cite_spans": [ { "start": 171, "end": 192, "text": "(Socher et al., 2012)", "ref_id": "BIBREF13" }, { "start": 430, "end": 453, "text": "(Gheisari et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Amazon Mechanical Turk (AMT) is probably the most widely used platform of this kind, enabling the distribution of low-skill but difficult-to-automate tasks to a network of humans who can work in parallel, when and where they prefer, for a certain amount of money. The availability of many workers at the same time allows researchers all over the world to annotate large datasets in a fraction of the time and money required when recruiting domain experts. Furthermore, crowd-workers are spread all over the world, making it possible to have annotations performed in different languages by native speakers. 
In recent years, AMT has proven successful in a wide range of NLP annotation tasks, such as named entity annotation in e-mails (Lawson et al., 2010) or medical texts, subjectivity word sense disambiguation (Akkaya et al., 2010), image captioning (Rashtchian et al., 2010), and much more.", "cite_spans": [ { "start": 792, "end": 813, "text": "(Lawson et al., 2010)", "ref_id": "BIBREF6" }, { "start": 872, "end": 893, "text": "(Akkaya et al., 2010)", "ref_id": "BIBREF0" }, { "start": 913, "end": 938, "text": "(Rashtchian et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, annotations obtained from AMT workers are often of low quality, since (i) workers are non-experts and can therefore make annotation mistakes, and (ii) some of them are spammers who try to maximise their earnings by submitting random answers as quickly as possible. Mitigating the effect of errors in datasets annotated by crowd-workers is one of the biggest challenges in using AMT. The mitigation strategy usually adopted by researchers is to collect multiple annotations of the same instance and apply different methods to deal with this information redundancy. In most cases, majority voting is an appropriate strategy: the final label assigned to an instance is the one provided by the majority of the workers, even if they do not all agree. However, if spammers always choose the same answer to finish the task more quickly, this strategy can end up assigning a wrong label to the instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While past works have described how to successfully deal with non-expertness (Callison-Burch, 2009; Mohammad and Turney, 2010), it is more challenging to identify spammers. 
Some tools (Hovy et al., 2013) deal with the problem offline, once the task is completed, trying to identify spammers by exploiting redundant annotations and comparing the answers given by all crowd-workers. In this setting, spammers are correctly identified, but they are nevertheless paid, because their annotations are filtered out only after the task is closed.", "cite_spans": [ { "start": 77, "end": 99, "text": "(Callison-Burch, 2009;", "ref_id": "BIBREF1" }, { "start": 100, "end": 126, "text": "Mohammad and Turney, 2010)", "ref_id": "BIBREF10" }, { "start": 185, "end": 204, "text": "(Hovy et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another idea to find spammers is to use a gold standard, a set of very easy-to-understand instances, previously annotated by an expert, that a careful worker should not miss. In this paradigm, when a worker gives the wrong answer to a gold question, one may infer that the annotator is trying to cheat and should be blocked. The AMT API provides a way to do this automatically, but the feature is not included in the web interface; therefore, the only way to obtain this behaviour is to write a program (in Python, PHP, or any other supported language) that checks whether the gold instances have been answered correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe EasyTurk, a web interface and a powerful API that together tackle these issues and enhance the experience of using AMT. The tool can aggregate more than one task instance in a single page shown to the worker, also concealing gold standard instances among them. Furthermore, EasyTurk can be configured to take an action, e.g. blocking a worker when they miss too many gold answers, marking their already-given answers as unreliable. 
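Such a gold-based check amounts to comparing a worker's answers with the expert labels over a span of gold instances and applying an accuracy threshold; a minimal sketch of the logic (our own illustration with hypothetical names and thresholds, not EasyTurk's actual code) could look like this:

```python
def gold_action(worker_answers, gold, min_span=10, min_accuracy=0.7):
    # `worker_answers` maps gold-instance ids to the worker's answers;
    # `gold` maps the same ids to the expert labels. The span size and
    # accuracy threshold are illustrative assumptions, not EasyTurk's defaults.
    answered = [i for i in worker_answers if i in gold]
    if len(answered) < min_span:
        return 'continue'  # too few gold answers to judge the worker yet
    correct = sum(worker_answers[i] == gold[i] for i in answered)
    return 'continue' if correct / len(answered) >= min_accuracy else 'block'
```
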
Finally, the software is open source, and its user-friendly interface has been implemented following the most recent guidelines for usability and responsiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Amazon Mechanical Turk 2 is an online marketplace for hiring workers and submitting to them atomic tasks that are usually easy for humans but difficult for machines. The atomic unit of work is called a Human Intelligence Task (HIT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "AMT has two kinds of users: requesters and workers. The former create the HITs (using the API or the web interface) and upload them to the Amazon servers, along with the fee that will be paid for each completed HIT. The latter search the HIT database, choose their preferred tasks, and complete them in exchange for monetary compensation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "Requesters can restrict the range of workers allowed to complete the task based on demographics, education level, spoken languages, and so on. Some requirements are free for the requester (for example, the worker's country of residence), but they normally raise the price of the HITs. Requesters can also assign custom qualifications to workers in order to filter them out when submitting HITs to the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "The platform also provides an automatic mechanism that allows multiple unique workers to complete the same HIT. This is useful, for example, in NLP tasks, where requesters usually need more than one answer for each HIT, so that the majority label can be selected, resulting in a higher-quality final annotation thanks to the 'wisdom of the crowd'. 
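For illustration, the majority-label selection just described can be sketched as follows (an illustrative Python snippet of ours, not part of EasyTurk or AMT):

```python
from collections import Counter

def majority_label(answers):
    # Return the label chosen by the majority of the workers for one
    # instance; `answers` is the list of labels collected from the
    # redundant assignments of that instance.
    label, _count = Counter(answers).most_common(1)[0]
    return label

# Three careful workers agree, one (possibly a spammer) disagrees:
majority_label(['true', 'true', 'false', 'true'])  # -> 'true'
```
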
Each annotation instance (a worker-HIT pair) is called an assignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "Requesters have the option of rejecting a particular worker's answer, in which case the worker is not paid. The above-described custom qualifications can be used to filter out, for a particular task, workers who did not reach sufficient accuracy in previous HITs. In specific cases, for example as a consequence of particularly sloppy annotations, a worker can be blocked and is no longer able to perform HITs for the requester.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "One of the main issues with using AMT is that some features are available only through the API, while others can be used only in the web interface. For example, through the web interface a requester can upload a TSV file with the data to be annotated, or select which qualifications the workers should have to complete the HITs. These two features are not available in the API; conversely, acceptance/rejection of the workers' jobs can be automated only through the API. Given these constraints, we developed EasyTurk to allow non-programmer users to submit HITs without writing code in a specific programming language, such as Python or Java, while still using the features available only through the API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amazon Mechanical Turk", "sec_num": "2" }, { "text": "EasyTurk is composed of three modules: (i) the web interface; (ii) the API; (iii) the server. 
Most of the features included in EasyTurk are accessible directly from the web interface, but they are managed by the server.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of EasyTurk", "sec_num": "3" }, { "text": "The original web interface of AMT has a powerful graphical editor for the templates used by the requester to display the data they want the worker to annotate. After creating the template file, one can upload a text document with the data (usually a CSV or XML file), and then AMT submits the HITs (one per file record/line) to the workforce.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More annotations in one HIT", "sec_num": "3.1" }, { "text": "In NLP, it often happens that a task corresponds to a binary assignment, meaning that an instance is labeled with a value in the set true/false. Usually researchers have a list of instances in one single file (for example, a JSON or CSV file). Submitting the records one by one, one per HIT, would be more expensive for the requester and time-consuming for workers, because they would need to click the confirm button after each instance annotation and wait for the new HIT to load, even if it is just a sentence or a short string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More annotations in one HIT", "sec_num": "3.1" }, { "text": "In EasyTurk, the requester can easily overcome this limitation by creating a template with multiple slots for the data. Then, using a sequential naming convention (for example, text1, text2, text3, etc.), the tool automatically infers how many records to fill into the template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More annotations in one HIT", "sec_num": "3.1" }, { "text": "In AMT, the requester has two options to check the annotation accuracy. First, they can perform an offline check (after the whole task has ended) using the information obtained by majority voting (Hovy et al., 2013). 
As an alternative, AMT provides a mechanism to check the answers of a HIT against a gold standard. Depending on the worker's answer, the system can accept or reject the HIT automatically. As outlined in Section 2, this is one of the features available only through the API and missing from the web interface.", "cite_spans": [ { "start": 196, "end": 215, "text": "(Hovy et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Upload of a gold standard", "sec_num": "3.2" }, { "text": "In EasyTurk, the requester can optionally add a document with some additional data containing the correct annotation. When populating the template, they can select how many gold instances need to be added for each HIT (see Figure 1), and decide, among a set of available options, the behavior of the system when the worker misses the gold instance(s).", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 232, "text": "Figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Upload of a gold standard", "sec_num": "3.2" }, { "text": "To avoid blocking or restricting a worker for having missed a single answer, the system can check the accuracy of workers over a span of HITs and take action only after the worker has completed at least that span (see Figure 2).", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 240, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Upload of a gold standard", "sec_num": "3.2" }, { "text": "When a worker misses a considerable number of gold instances, the requester can decide how the tool should behave. Figure 2 shows the range of possible options. First of all, one has to decide whether to accept or reject the assignment. In the latter case, the worker can be restricted or blocked. 
Restriction means that the worker can no longer participate in the tasks of the current project, but they are still allowed to complete HITs when a new project from the same requester is submitted to AMT. EasyTurk uses AMT qualifications for this purpose. 3 When a worker is blocked, instead, they will no longer see any HIT submitted by the requester. Both properties (restriction and block) are reversible.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Automatic block/restrict the workers", "sec_num": "3.3" }, { "text": "To limit spammers (see Section 1), a worker can also be blocked/restricted when they submit HITs too quickly, which suggests, for example, that the worker is not even reading the instances before annotating them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic block/restrict the workers", "sec_num": "3.3" }, { "text": "When running EasyTurk, the user is asked to provide an administration password. With these credentials, the administrator can create new users, each with their own username and password. Each user is then linked to their AMT API keys, allowing a single instance of EasyTurk to serve different users with different AMT accounts. A flag can be set to switch a user to work on the Sandbox version of AMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User management", "sec_num": "3.4" }, { "text": "The web interface of EasyTurk is written using VueJS. 4 The structure of the website is built with Tailwind CSS 5 , and the design is inspired by Material Design. 
6 Through the interface, requesters can group HITs into projects and follow all the steps from the project definition to the visualisation of the results.", "cite_spans": [ { "start": 54, "end": 55, "text": "4", "ref_id": null }, { "start": 159, "end": 160, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "Project definition. The general information about the project (description, reward, time allotted for the workers, layout, qualifications needed, and so on) is provided and a project is created.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "Data insertion. In this phase, a file with the data is uploaded to the system (plus an additional file, if needed, for the gold standard, see Section 3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "HITs generation. The HITs are generated by grouping the data (depending on how many items the requester wants for each HIT) and optionally mixing it with the gold standard (Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 181, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "Condition management. The requester sets the tool behaviour in specific cases, for instance when a worker misses the gold standard (Figure 2).", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 141, "text": "(Figure 2)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "HITs submission. The HITs are submitted to AMT in batches of a predetermined size (set by the requester).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "HITs monitoring. The dot matrix interface gives an overview of how the task is going (see Figure 3). 
In this phase, the requester can monitor all aspects of the annotation: the approval rate, the speed, the workers, and so on.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 99, "text": "Figure 3)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "Retrieval of results. The resulting annotations (even when the gold is missed or the HIT is rejected) can be visualised and downloaded in JSON format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "In developing EasyTurk, we wanted to stress the importance of having a readable overview of how the annotation is going, from the HITs submission to the retrieval of the results. We found the dot matrix chart 7 to be an effective solution to achieve this goal (see Figure 3). Each dot represents a HIT and is painted with a different color depending on how many assignments have been rejected or whether the gold instances have been missed. Different colorization strategies have been chosen to highlight the different statuses of the HITs: unassigned, pending, completed. Using this interface, a large number of red dots may indicate that the gold standard was ambiguous, allowing the requester to tune it better in the future.", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 274, "text": "Figure 3)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The web interface", "sec_num": "3.5" }, { "text": "An API supporting the web interface and written in PHP is included in the EasyTurk package. It can also be used as a standalone program to integrate the features of the tool into third-party packages. 
Since the web interface relies on this API to work properly, installing the API is mandatory in order to take advantage of the web interface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The API", "sec_num": "3.6" }, { "text": "The last part of EasyTurk is a server script, written in PHP. It performs all the tasks needed to update the information based on the AMT APIs (for example, the status of a HIT or the triggering of the actions described in Section 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The server", "sec_num": "3.7" }, { "text": "EasyTurk can also be configured to work with Amazon Simple Notification Service 8 (SNS), so that most of the information about the HITs can be updated almost in real time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The server", "sec_num": "3.7" }, { "text": "EasyTurk is completely free, available on GitHub, 9 and released as open source under the Apache 2.0 license. 10 The web interface is developed in VueJS and needs NodeJS 11 to be compiled and launched.", "cite_spans": [ { "start": 50, "end": 51, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Release", "sec_num": "4" }, { "text": "Both the API and the server are written in PHP 12 and need a machine with at least version 7 of the interpreter and MySQL server 13 installed. The server can be run as a service and does not require any other particular dependency to work. The API, instead, must be configured to run on a web server (such as Apache 14 or Nginx 15 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Release", "sec_num": "4" }, { "text": "Since 2005, when AMT was released, an increasing number of researchers have used this platform for research purposes. In particular, the NLP community has taken advantage of AMT to bring linguistic resources to a new scale, also with the support of Amazon. 
For example, in 2010 Amazon sponsored a workshop during the NAACL conference, where researchers were given 100 dollars of credit on the platform to run an annotation task and answer some meta-research questions, such as how non-expert workers can perform complex annotations, or how one can ensure high-quality annotations from crowd-sourced contributors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Some past works have dealt with the above-mentioned issues related to crowd-worker quality. In (Hovy et al., 2013), the authors present software that, after a round of annotations using AMT, tries to understand in an unsupervised fashion which workers perform better and, consequently, which annotations to keep and which to discard when there is redundancy. In (Wais et al., 2010), the efficiency of AMT is analysed over 100,000 local business listings for an online directory. A mechanism for filtering out low-quality workers in order to build a reliable, highly accurate workforce is described, with the goal of better understanding the problem of quality control in crowdsourcing systems.", "cite_spans": [ { "start": 94, "end": 113, "text": "(Hovy et al., 2013)", "ref_id": "BIBREF5" }, { "start": 382, "end": 401, "text": "(Wais et al., 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Attempts have also been made to improve the potential of AMT by building new frameworks on top of the AMT API. CloudResearch, formerly TurkPrime, (Litman et al., 2017) was created for this purpose and was initially free for researchers to use. It is now part of a larger company and is no longer free. 
LingoTurk (Pusse et al., 2016) is an open-source, freely available crowdsourcing client/server system aimed primarily at psycholinguistic experimentation, where custom and specialized user interfaces are required but not supported by popular crowdsourcing task management platforms. OpenMTurk (Feeney et al., 2018) is a free and open-source administration tool for managing research studies using AMT. TurKit (Little et al., 2010) is a toolkit for prototyping and exploring truly algorithmic human computation, while maintaining a straightforward imperative programming style. Turktools (Erlewine and Kotek, 2016) is a set of free, open-source tools that allow linguists to post studies online and simplify the interaction with AMT. TurkGate 16 (https://github.com/gideongoldin/TurkGate) provides better control and verification of workers' access to an external site and allows the grouping of HITs, so that workers may only access one survey within a group. AMTI 17 (https://github.com/allenai/amti), developed at the Allen Institute for AI, is a command-line interface for AMT that emphasizes the ability to quickly iterate on and run reproducible crowdsourcing experiments. Finally, AMT is integrated into more complex tools to add human annotations. 
Qurk (Marcus et al., 2011), for example, is a query system for managing annotation workflows.", "cite_spans": [ { "start": 150, "end": 171, "text": "(Litman et al., 2017)", "ref_id": "BIBREF7" }, { "start": 328, "end": 348, "text": "(Pusse et al., 2016)", "ref_id": "BIBREF11" }, { "start": 611, "end": 632, "text": "(Feeney et al., 2018)", "ref_id": "BIBREF3" }, { "start": 1326, "end": 1328, "text": "16", "ref_id": null }, { "start": 1579, "end": 1600, "text": "(Marcus et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we presented EasyTurk, a free program that improves the potential of Amazon Mechanical Turk by adding some features that are not available out of the box. In particular, the requester now has the ability to insert multiple task instances in a single HIT and optionally mix them with a gold standard that can be used to track worker accuracy. Finally, when certain events are triggered (for example, a worker answering a HIT too quickly or missing the gold standard), EasyTurk can be programmed to take an action, such as rejecting the assignment or blocking/restricting the worker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "The tool is free and open source, and can be downloaded from GitHub and installed locally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In the future, we plan to implement new features. For example, the system could also intercept spammers by detecting particular answer patterns (for example, a set of HITs where the same answer is always selected). 
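Such a pattern-based check can be sketched as follows (a purely illustrative heuristic of ours, not a feature of the current release):

```python
def looks_like_spammer(answers, window=20):
    # Flag a worker whose last `window` answers are all identical, the
    # simplest suspicious pattern; both the rule and the window size
    # are illustrative assumptions, not EasyTurk's actual behaviour.
    recent = answers[-window:]
    return len(recent) >= window and len(set(recent)) == 1
```
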
We would also like to include in EasyTurk a collection of templates for basic annotations (for example, yes/no, a set of possible answers, free text, and so on), so that requesters no longer need to create their templates on the AMT website. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://www.mturk.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A qualification is a custom property that a requester can assign to one or more workers. In EasyTurk, each project is associated with a qualification: when a requester wants to restrict a worker, the tool assigns the qualification to the worker, and consequently the task is hidden in the AMT worker console for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://vuejs.org/ 5 https://tailwindcss.com/ 6 https://material.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://datascientist.reviews/dot-matrix-chart/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Amazon mechanical turk for subjectivity word sense disambiguation", "authors": [ { "first": "Cem", "middle": [], "last": "Akkaya", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Conrad", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", "volume": "", "issue": "", "pages": "195--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cem Akkaya, Alexander Conrad, Janyce Wiebe, and Rada Mihalcea. 2010. Amazon mechanical turk for subjectivity word sense disambiguation. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 195-203, Los Angeles. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fast, cheap, and creative: Evaluating translation quality using amazon's mechanical turk", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "286--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, EMNLP '09, pages 286-295, USA. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A streamlined approach to online linguistic surveys", "authors": [ { "first": "Hadas", "middle": [], "last": "Michael Yoshitaka Erlewine", "suffix": "" }, { "first": "", "middle": [], "last": "Kotek", "suffix": "" } ], "year": 2016, "venue": "Natural Language & Linguistic Theory", "volume": "34", "issue": "", "pages": "481--495", "other_ids": { "DOI": [ "10.1007/s11049-015-9305-9" ] }, "num": null, "urls": [], "raw_text": "Michael Yoshitaka Erlewine and Hadas Kotek. 2016. A streamlined approach to online linguistic surveys. 
Natural Language & Linguistic Theory, 34(2):481-495.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "OpenMTurk: An Open-Source Administration Tool for Designing Robust MTurk Studies", "authors": [ { "first": "Justin", "middle": [], "last": "Feeney", "suffix": "" }, { "first": "Gordon", "middle": [], "last": "Pennycook", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Boxtel", "suffix": "" } ], "year": 2018, "venue": "SSRN Electronic Journal", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2139/ssrn.3265409" ] }, "num": null, "urls": [], "raw_text": "Justin Feeney, Gordon Pennycook, and Matthew Boxtel. 2018. OpenMTurk: An Open-Source Administration Tool for Designing Robust MTurk Studies. SSRN Electronic Journal.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A survey on deep learning in big data", "authors": [ { "first": "M", "middle": [], "last": "Gheisari", "suffix": "" }, { "first": "G", "middle": [], "last": "Wang", "suffix": "" }, { "first": "M", "middle": [ "Z A" ], "last": "Bhuiyan", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC)", "volume": "2", "issue": "", "pages": "173--180", "other_ids": { "DOI": [ "10.1109/CSE-EUC.2017.215" ] }, "num": null, "urls": [], "raw_text": "M. Gheisari, G. Wang, and M. Z. A. Bhuiyan. 2017. A survey on deep learning in big data. 
In 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), volume 2, pages 173-180.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning whom to trust with MACE", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1120--1130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Annotating large email datasets for named entity recognition with mechanical turk", "authors": [ { "first": "Nolan", "middle": [], "last": "Lawson", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Eustice", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Perkowitz", "suffix": "" }, { "first": "Meliha", "middle": [], "last": "Yetisgen-Yildiz", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", "volume": "", "issue": "", "pages": "71--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nolan Lawson, Kevin Eustice, Mike Perkowitz, and Meliha Yetisgen-Yildiz. 2010. 
Annotating large email datasets for named entity recognition with mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 71-79, Los Angeles. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences", "authors": [ { "first": "Leib", "middle": [], "last": "Litman", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "Tzvi", "middle": [], "last": "Abberbock", "suffix": "" } ], "year": 2017, "venue": "Behavior Research Methods", "volume": "49", "issue": "2", "pages": "433--442", "other_ids": { "DOI": [ "10.3758/s13428-016-0727-z" ] }, "num": null, "urls": [], "raw_text": "Leib Litman, Jonathan Robinson, and Tzvi Abberbock. 2017. TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2):433-442.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Turkit: Human computation algorithms on mechanical turk", "authors": [ { "first": "Greg", "middle": [], "last": "Little", "suffix": "" }, { "first": "Lydia", "middle": [ "B" ], "last": "Chilton", "suffix": "" }, { "first": "Max", "middle": [], "last": "Goldman", "suffix": "" }, { "first": "Robert", "middle": [ "C" ], "last": "Miller", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, UIST '10", "volume": "", "issue": "", "pages": "57--66", "other_ids": { "DOI": [ "10.1145/1866029.1866040" ] }, "num": null, "urls": [], "raw_text": "Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. 2010. Turkit: Human computation algorithms on mechanical turk. In Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, UIST '10, pages 57-66, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Crowdsourced databases: Query processing with people", "authors": [ { "first": "Adam", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Madden", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "211--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Marcus, Eugene Wu, Samuel Madden, and Robert Miller. 2011. Crowdsourced databases: Query processing with people. pages 211-214.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", "volume": "", "issue": "", "pages": "26--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34, Los Angeles, CA. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "LingoTurk: managing crowdsourced tasks for psycholinguistics", "authors": [ { "first": "Florian", "middle": [], "last": "Pusse", "suffix": "" }, { "first": "Asad", "middle": [], "last": "Sayeed", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "57--61", "other_ids": { "DOI": [ "10.18653/v1/N16-3012" ] }, "num": null, "urls": [], "raw_text": "Florian Pusse, Asad Sayeed, and Vera Demberg. 2016. LingoTurk: managing crowdsourced tasks for psycholinguistics. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 57-61, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Collecting image annotations using Amazon's mechanical turk", "authors": [ { "first": "Cyrus", "middle": [], "last": "Rashtchian", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Micah", "middle": [], "last": "Hodosh", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", "volume": "", "issue": "", "pages": "139--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using Amazon's mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 139-147, Los Angeles. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep learning for nlp (without magic)", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "Tutorial Abstracts of ACL 2012, ACL '12", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Yoshua Bengio, and Christopher D. Manning. 2012. Deep learning for nlp (without magic). In Tutorial Abstracts of ACL 2012, ACL '12, page 5, USA. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Towards building a high-quality workforce with mechanical turk", "authors": [ { "first": "Paul", "middle": [], "last": "Wais", "suffix": "" }, { "first": "Shivaram", "middle": [], "last": "Lingamneni", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Fennell", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Goldenberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lubarov", "suffix": "" }, { "first": "David", "middle": [], "last": "Marin", "suffix": "" }, { "first": "Hari", "middle": [], "last": "Simons", "suffix": "" } ], "year": 2010, "venue": "Proc. NIPS Workshop on Computational Social Science and the Wisdom of Crowds", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Wais, Shivaram Lingamneni, Duncan Cook, Jason Fennell, Benjamin Goldenberg, Daniel Lubarov, David Marin, and Hari Simons. 2010. Towards building a high-quality workforce with mechanical turk. In Proc. 
NIPS Workshop on Computational Social Science and the Wisdom of Crowds.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Preliminary experience with amazon's mechanical turk for annotating medical named entities", "authors": [ { "first": "Meliha", "middle": [], "last": "Yetisgen", "suffix": "" }, { "first": "Imre", "middle": [], "last": "Solti", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Halgrim", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meliha Yetisgen, Imre Solti, Fei Xia, and Scott Halgrim. 2010. Preliminary experience with amazon's mechanical turk for annotating medical named entities.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "Selection box for mixing gold and unknown data.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "Selection box for managing the behavior of the tool depending on the workers' answers.", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "The dot matrix showing the HITs.", "type_str": "figure" } } } }