input: string, length 3.23k–13.3k
output: list, 1–3 items
id: string, length 40
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We define the Effective Word Score of score x as EFWS(x) = N(+x) - N(-x), where N(x) is the number of words in the tweet with polarity score x. Output:
[ "How is effective word score calculated?" ]
task461-730c07fc1af545b8b5aba3c59caa153b
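For illustration, a minimal Python sketch of the Effective Word Score defined in the context above, EFWS(x) = N(+x) - N(-x), where N(x) counts the words in a tweet whose polarity score equals x. The per-word polarity scores are assumed to come from a sentiment lexicon that the excerpt does not specify.

    from collections import Counter

    def effective_word_score(polarity_scores, x):
        """EFWS(x) = N(+x) - N(-x) over one tweet's per-word polarity scores."""
        counts = Counter(polarity_scores)
        return counts[x] - counts[-x]

    # Example: per-word scores for a 5-word tweet (hypothetical lexicon output)
    scores = [1, 2, -2, 1, 0]
    print(effective_word_score(scores, 1))  # N(+1)=2, N(-1)=0 -> 2
    print(effective_word_score(scores, 2))  # N(+2)=1, N(-2)=1 -> 0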
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In this study, we focused on the LDA approaches to recommendation systems and given the importance of research, we have studied recent impressive articles on this subject and presented a taxonomy of recommendation systems based on LDA of the recent research. 
we evaluated ISWC and WWW conferences articles from DBLP website and used the Gibbs sampling algorithm as an evaluation parameter. We succeeded in discovering the relationship between LDA topics and paper features and also obtained the researchers' interest in research field. To perform approximate inference and learning LDA, there are many inference methods for LDA topic model such as Gibbs sampling, collapsed Variational Bayes, Expectation Maximization. Gibbs sampling is a popular technique because of its simplicity and low latency. However, for large numbers of topics, Gibbs sampling can become unwieldy. In this paper, we use Gibbs Sampling in our experiment in section 5. Output:
[ "How they utilize LDA and Gibbs sampling to evaluate ISWC and WWW publications?" ]
task461-4fd731ccee8e423e9b31fcfe719cc223
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. Output:
[ "Does the model incorporate coreference and entailment?" ]
task461-4a96f59e0f7d491587449e6976b6c212
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Since pre-trained models operate on subword level, we need to estimate subword translation probabilities. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. 
Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing. Output:
[ "What metrics are used for evaluation?" ]
task461-dbd9820af1204757ac3f6c2bc927b3c4
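An illustrative aside on the Labeled Attachment Score cited in the context above: LAS is conventionally the fraction of tokens whose predicted head and dependency label both match the gold annotation. The sketch below assumes that standard definition; the paper's own evaluation script is not given in the excerpt.

    def labeled_attachment_score(gold, pred):
        """gold, pred: one (head_index, dep_label) pair per token of a sentence."""
        assert len(gold) == len(pred) and len(gold) > 0
        correct = sum(1 for g, p in zip(gold, pred) if g == p)
        return correct / len(gold)

    # Hypothetical 4-token sentence: 3 of 4 arcs fully correct -> LAS = 0.75
    gold = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "punct")]
    pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (2, "punct")]
    print(labeled_attachment_score(gold, pred))  # 0.75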
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU. Output:
[ "Why is supporting fact supervision necessary for DMN?" ]
task461-650d54f848eb4f4aa2a9db92655ff0f4
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. 
MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. Output:
[ "How does muli-agent dual learning work?" ]
task461-b2313616394a4913b8571eba58621095
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training. Output:
[ "What language pairs did they experiment with?" ]
task461-8b6b6e9b04e7483e97e0dadc7cc32a56
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). 
We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Output:
[ "What type of neural model was used?" ]
task461-3ca7f9e06bd54473a6575bf09e9e6151
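A hedged sketch of the pair-wise evidence classifier described in the context above, in which each (question, policy sentence) pair is scored by a BERT sequence-classification head. The Hugging Face checkpoint, library calls, and the idea of thresholding the score are illustrative assumptions, not details taken from the paper.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Generic BERT with a 2-way head: evidence vs. not evidence (untrained here).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    def evidence_probability(question, sentence):
        """Score one (question, policy sentence) pair as a two-segment BERT input."""
        inputs = tokenizer(question, sentence, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1)[0, 1].item()

    # Usage: keep sentences scoring above some chosen threshold as candidate evidence.
    p = evidence_probability("Does the app share my location?",
                             "We may disclose location data to third-party partners.")
    print(p)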
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. 
Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. Output:
[ "What are the currently available datasets for EAC?" ]
task461-dad4b694c9b44001b7df72f68ff67334
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The used classifiers are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB). Output:
[ "What classical machine learning algorithms are used?" ]
task461-ea0bf71696fe4b519cae076138cd196e
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model. Output:
[ "What word embeddings do they test?" ]
task461-5c45869f507047c69ebd785e309d31cb
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The Title-to-Story system is a baseline, which generates directly from topic. Output:
[ "What are the baselines?" ]
task461-dbd9c1ac5e044622a5069f677e699c9b
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became meainstream Output:
[ "Which decades did they look at?" ]
task461-1c97f931714a494b9e96eed18345b948
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. 
Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). Output:
[ "What the system designs introduced?" ]
task461-a87630839f57469789949450776503ab
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: These recordings contain background noise (as is expected in a mall) and have different languages (Indians speak a mixture of English and Hindi). Output:
[ "What languages are considered?" ]
task461-58eb421446034bd3a4d002fde9ffeea3
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details. Results and Discussion Output:
[ "What are the six target languages?" ]
task461-3c2cf90a4e5642a58004fe8970fca849
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: For testing, we also applied the TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. Output:
[ "What previously annotated databases are available?" ]
task461-202403332b1e4addba3505445b78b4d1
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. Output:
[ "Do they report results only on English datasets?" ]
task461-3240809d8c594630842951bc16438ff6
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We can see that how intelligence is used varies by field: in computer science the most similar words include artificial and ai; in finance, similar words include abilities and consciousness. Output:
[ "Which words are used differently across ArXiv?" ]
task461-b0a084510cea48f997f59e9668b67e57
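The input in the example above reports that the nearest neighbours of "intelligence" differ between arXiv fields. As an illustrative sketch only, the snippet below shows how such per-field neighbour lists could be inspected with gensim Word2Vec models; the two toy corpora and all hyperparameters are placeholders rather than details taken from the excerpt (gensim 4.x API assumed).

    from gensim.models import Word2Vec

    # Placeholder corpora standing in for tokenised abstracts from two arXiv fields;
    # each corpus is a list of token lists.
    cs_sentences = [["artificial", "intelligence", "research"], ["deep", "learning", "ai"]]
    finance_sentences = [["market", "intelligence", "abilities"], ["risk", "consciousness"]]

    cs_model = Word2Vec(cs_sentences, vector_size=50, window=5, min_count=1, epochs=20)
    fin_model = Word2Vec(finance_sentences, vector_size=50, window=5, min_count=1, epochs=20)

    # Compare how "intelligence" is used in each field via its nearest neighbours.
    print(cs_model.wv.most_similar("intelligence", topn=3))
    print(fin_model.wv.most_similar("intelligence", topn=3))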
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. Output:
[ "What other languages did they translate the data from?" ]
task461-416b01259556481db06676a1cff046f7
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class. Output:
[ "Is model explanation output evaluated, what metric was used?" ]
task461-d36e3fa3f73a4b25863f39da37de16f4
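The input in the example above defines balanced accuracy as the average of the per-class accuracies. A minimal sketch of that computation is given below, using hypothetical gold and predicted labels for a three-class problem; scikit-learn's balanced_accuracy_score is included only as a cross-check.

    import numpy as np
    from sklearn.metrics import balanced_accuracy_score

    # Hypothetical gold and predicted labels for a three-class problem.
    y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])
    y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 0, 2])

    # Balanced accuracy as described above: the mean of the per-class accuracies,
    # which keeps a majority class from dominating the score.
    per_class_acc = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    print(np.mean(per_class_acc))                   # manual computation
    print(balanced_accuracy_score(y_true, y_pred))  # same value via scikit-learn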
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. Output:
[ "What domain does the dataset fall into?" ]
task461-28a599c62b8d4781861bed180e573dfe
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? 
Now complete the following example - Input: We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. Output:
[ "What state of the art models were used in experiments?" ]
task461-694c01b1dbc94eb6bc819bff64e7bfd6
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0 We add two components to the Softmax gating network: sparsity and noise. 
Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1 Output:
[ "What equations are used for the trainable gating network?" ]
task461-e32d2d0696fe43d88464e50b5df59b38
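The input in the example above describes a gating network that adds tunable Gaussian noise to the logits and keeps only the top k values before the softmax (the INLINEFORM/DISPLAYFORM markers stand in for the paper's equations). The sketch below is one possible PyTorch rendering of that mechanism; the softplus noise scaling, tensor shapes, and all names are assumptions rather than details given in the excerpt.

    import torch
    import torch.nn.functional as F

    def noisy_top_k_gating(x, w_gate, w_noise, k, train=True):
        # x:       [batch, d_model] input
        # w_gate:  [d_model, n_experts] trainable gating weights
        # w_noise: [d_model, n_experts] trainable noise weights
        # Returns  [batch, n_experts] sparse gate values (zeros outside the top k).
        clean_logits = x @ w_gate
        if train:
            # Tunable Gaussian noise, scaled per component by softplus(x @ w_noise).
            noise_std = F.softplus(x @ w_noise)
            logits = clean_logits + torch.randn_like(clean_logits) * noise_std
        else:
            logits = clean_logits
        # Keep only the top-k logits; the rest are set to -inf so their gates become 0.
        top_vals, top_idx = logits.topk(k, dim=1)
        sparse_logits = torch.full_like(logits, float("-inf")).scatter(1, top_idx, top_vals)
        return F.softmax(sparse_logits, dim=1)

    # Example: 4 inputs, 8 experts, keep the top 2 gates per input.
    d_model, n_experts = 16, 8
    x = torch.randn(4, d_model)
    w_gate = torch.randn(d_model, n_experts)
    w_noise = torch.randn(d_model, n_experts)
    gates = noisy_top_k_gating(x, w_gate, w_noise, k=2)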
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet. Output:
[ "Are trained word embeddings used for any other NLP task?" ]
task461-3b54a18403e64d57a762605d7edb41dc
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Output:
[ "What were the traditional linguistic feature-based models?", "What type of baseline are established for the two datasets?" ]
task461-48f01db5f52f4efaa962c6844a37cf13
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Results ::: CoinCollector In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. 
to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively. In case of Game CoinCollector, In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above. In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed. 
In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games. Output:
[ "How better does new approach behave than existing solutions?" ]
task461-b5f323f30ceb462eae65505b0767865a
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with and labels for classification as stage one. 
Our dataset contains 1313 tweet with positive label and 1887 tweets with a negative label . We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc. Output:
[ "Are the tweets specific to a region?" ]
task461-01c3ea147efb42778de6d3f254b31f32
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. 
Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically. It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below. Output:
[ "How did they gather the data?" ]
task461-756ca72783f046e6b6e70d40db8c1626
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. 
For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. Output:
[ "What baselines non-adaptive baselines are used?" ]
task461-87f737a56af04ed38ab8f680812893be
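The context in the example above fully specifies a Transformer "base" configuration (6 encoder and 6 decoder layers, model dimension 512, feed-forward dimension 2048, 8 attention heads). A minimal sketch of that configuration, assuming PyTorch's built-in nn.Transformer rather than the authors' actual implementation:

```python
import torch
import torch.nn as nn

# Hyperparameters as described in the example context (Transformer "base"):
# 6 encoder / 6 decoder layers, d_model=512, feed-forward 2048, 8 heads.
base_transformer = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=2048,
)

# Toy forward pass with random embeddings, shaped (sequence_length, batch, d_model).
src = torch.rand(10, 2, 512)   # source sequence
tgt = torch.rand(7, 2, 512)    # target sequence
out = base_transformer(src, tgt)
print(out.shape)  # torch.Size([7, 2, 512])
```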
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We finally experiment with neural models, although our dataset is relatively small. 
We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). Output:
[ "What supervised methods are used?" ]
task461-f6bf5eab5a3b4973b14c7e7d2ebbff51
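The example above enumerates a suite of non-neural classifiers (SVMs, logistic regression, Naïve Bayes, Perceptron, decision trees). A sketch of how such a suite could be run with scikit-learn; the TF-IDF features and the toy corpus are assumptions for illustration, not details from the cited paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Toy corpus standing in for the paper's (unavailable) dataset.
texts = ["i am so happy today", "this is terrifying", "what a calm evening", "i feel furious"]
labels = ["joy", "fear", "calm", "anger"]

classifiers = {
    "svm": LinearSVC(),
    "logreg": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "perceptron": Perceptron(),
    "decision_tree": DecisionTreeClassifier(),
}

for name, clf in classifiers.items():
    # TF-IDF features are an assumption; the paper does not specify its features here.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(name, model.predict(["i am really scared"]))
```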
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few. 
Output:
[ "What domains are considered that have such large vocabularies?" ]
task461-50fffa5a22c9497b99d62c6d883f81d5
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: If we take the best psr ensemble trained on CBT as a baseline, improving the model architecture as in BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , continuing to use the original CBT training data, lead to improvements of INLINEFORM0 and INLINEFORM1 absolute on named entities and common nouns respectively. 
By contrast, inflating the training dataset provided a boost of INLINEFORM2 while using the same model. Output:
[ "How large are the improvements of the Attention-Sum Reader model when using the BookTest dataset?" ]
task461-819668e9fded4db68c07f73e3f66e013
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. 
Output:
[ "Did the authors use a crowdsourcing platform?" ]
task461-ce7ef2d4dc1f4828ab40e208aa742617
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We created eight different classifiers, each of which used one of the following eight features available from a tweet as retrieved from a stream of the Twitter API: User location (uloc): This is the location the user specifies in their profile. 
While this feature might seem a priori useful, it is somewhat limited as this is a free text field that users can leave empty, input a location name that is ambiguous or has typos, or a string that does not match with any specific locations (e.g., “at home”). Looking at users' self-reported locations, Hecht et al. BIBREF49 found that 66% report information that can be translated, accurately or inaccurately, to a geographic location, with the other 34% being either empty or not geolocalisable. User language (ulang): This is the user's self-declared user interface language. The interface language might be indicative of the user's country of origin; however, they might also have set up the interface in a different language, such as English, because it was the default language when they signed up or because the language of their choice is not available. Timezone (tz): This indicates the time zone that the user has specified in their settings, e.g., “Pacific Time (US & Canada)”. When the user has specified an accurate time zone in their settings, it can be indicative of their country of origin; however, some users may have the default time zone in their settings, or they may use an equivalent time zone belonging to a different location (e.g., “Europe/London” for a user in Portugal). Also, Twitter's list of time zones does not include all countries. Tweet language (tlang): The language in which a tweet is believed to be written is automatically detected by Twitter. It has been found to be accurate for major languages, but it leaves much to be desired for less widely used languages. Twitter's language identifier has also been found to struggle with multilingual tweets, where parts of a tweet are written in different languages BIBREF50 . Offset (offset): This is the offset, with respect to UTC/GMT, that the user has specified in their settings. It is similar to the time zone, albeit more limited as it is shared with a number of countries. User name (name): This is the name that the user specifies in their settings, which can be their real name, or an alternative name they choose to use. The name of a user can reveal, in some cases, their country of origin. User description (description): This is a free text where a user can describe themselves, their interests, etc. Tweet content (content): The text that forms the actual content of the tweet. The use of content has a number of caveats. One is that content might change over time, and therefore new tweets might discuss new topics that the classifiers have not seen before. Another caveat is that the content of the tweet might not be location-specific; in a previous study, Rakesh et al. BIBREF51 found that the content of only 289 out of 10,000 tweets was location-specific. Output:
[ "What are the eight features mentioned?" ]
task461-7fc5c483169c4183b780f7ee9d1f4ff7
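The eight features listed in the example above correspond loosely to fields of the legacy Twitter API v1.1 tweet object. A hypothetical extraction sketch; the field names (user.location, user.time_zone, utc_offset, and so on) are assumptions based on that legacy payload and are not taken from the paper:

```python
from typing import Any, Dict

def extract_location_features(tweet: Dict[str, Any]) -> Dict[str, Any]:
    """Pull the eight per-tweet features listed in the context above.

    Field names follow the legacy Twitter API v1.1 tweet object and are an
    assumption here; newer API versions expose different (or no) equivalents.
    """
    user = tweet.get("user", {})
    return {
        "uloc": user.get("location"),             # self-reported profile location
        "ulang": user.get("lang"),                # user interface language
        "tz": user.get("time_zone"),              # self-declared time zone
        "tlang": tweet.get("lang"),               # Twitter's detected tweet language
        "offset": user.get("utc_offset"),         # UTC offset from settings
        "name": user.get("name"),                 # display name
        "description": user.get("description"),   # profile bio
        "content": tweet.get("text"),             # tweet text itself
    }

# Example with a minimal, made-up payload:
example = {"text": "at home watching the rain", "lang": "en",
           "user": {"location": "Lisboa", "lang": "pt", "name": "Ana"}}
print(extract_location_features(example))
```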
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. 
The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) (182,605 tweets). For more information about the dataset, readers are referred to BIBREF22. We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. Output:
[ "What datasets are used in training?" ]
task461-0296e959f7544a839cee4922a7cd5070
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi. Output:
[ "How many annotators tagged each text?" ]
task461-f22ca3d92fd140c1974134199ffde1b5
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. 
Posts with general profanity are not targeted, but they contain non-acceptable language. Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. It can be a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbullying. Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue). Output:
[ "What kinds of offensive content are explored?" ]
task461-275dbc78683f43d381e07e99d8194b9c
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The main reason is that when the controllability is strong, the change of selection will directly affect the text realization so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. 
When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. Output:
[ "Does the performance necessarily drop when more control is desired?" ]
task461-4ee90b97aad141b9983d04633cef39b5
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Table TABREF14 presents the distribution of the tweets by country before and after the filtering process. A large portion of the samples is from India because the MeToo movement has peaked towards the end of 2018 in India. 
There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country. Output:
[ "Do the tweets come from a specific region?" ]
task461-b94c237118f24f069bdf4c81c3aaa68e
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. Output:
[ "What does an individual model consist of?" ]
task461-64fa7debe662408f9015b756e7e2608e
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Sentiment Classification We first conduct a multi-task experiment on sentiment classification. We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets. 
All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. Transferability of Shared Sentence Representation With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks. Introducing Sequence Labeling as Auxiliary Task A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). Output:
[ "What tasks did they experiment with?" ]
task461-6846585edc9b459c84ba563ca093bc2b
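The example above describes a random 70%/10%/20% train/development/test partition of each dataset. A sketch of one way to produce such a split, assuming scikit-learn's train_test_split; the paper's actual splitting code is not shown here:

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for one of the 16 review datasets.
examples = [f"review {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]

# First carve off the 20% test split, then take 1/8 of the remainder as dev,
# which yields the 70% / 10% / 20% proportions described in the context.
train_dev_x, test_x, train_dev_y, test_y = train_test_split(
    examples, labels, test_size=0.20, random_state=0)
train_x, dev_x, train_y, dev_y = train_test_split(
    train_dev_x, train_dev_y, test_size=0.125, random_state=0)

print(len(train_x), len(dev_x), len(test_x))  # 70 10 20
```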
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Output:
[ "How big is the provided treebank?" ]
task461-868aa5c5b5ad45998fbc9be4cc816209
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. Output:
[ "Do they look for inconsistencies between different languages' annotations in UniMorph?" ]
task461-ecc6941c108046a5b153d12ba5525a81
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We use the following three evaluation metrics: Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences. 
Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence. Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system. Output:
[ "what evaluation metrics were used?" ]
task461-b857a1edda254f3aa41a466d79521487
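A note for readers of the PER / WER / WER 100 definitions quoted in the record above: they translate almost directly into code. The following is a minimal Python sketch under those definitions; the edit-distance routine, the function names, and the toy phoneme sequences are illustrative choices, not taken from the cited paper.

def levenshtein(a, b):
    # Standard dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def per(predicted, gold):
    # Corpus-level Phoneme Error Rate: total edit distance divided by the
    # total length of the gold phoneme sequences.
    return sum(levenshtein(p, g) for p, g in zip(predicted, gold)) / sum(len(g) for g in gold)

def wer(predicted, gold):
    # Word Error Rate as a fraction: share of words whose predicted phoneme
    # sequence is not an exact match (multiply by 100 for a percentage).
    return sum(p != g for p, g in zip(predicted, gold)) / len(gold)

def wer100(nbest_lists, gold):
    # WER 100: share of words whose gold pronunciation is missing from the
    # system's first 100 guesses.
    return sum(g not in nbest[:100] for nbest, g in zip(nbest_lists, gold)) / len(gold)

# Toy example: one word, one substitution error.
gold = [["K", "AE", "T"]]
pred = [["K", "AA", "T"]]
print(per(pred, gold))   # 1 edit / 3 gold phonemes = 0.333...
print(wer(pred, gold))   # 1.0, the prediction is not an exact match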
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. 
Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework. Output:
[ "What semantic features help in detecting whether a piece of text is genuine or generated? of " ]
task461-cbb1d8b5226c43b982f37ebb84b088f9
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . Output:
[ "what language does this paper focus on?" ]
task461-d61c814ca94d42b494867dde3e6a6429
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We evaluate the proposed transfer learning techniques in two non-English language pairs of WMT 2019 news translation tasks: French$\rightarrow $German and German$\rightarrow $Czech. Output:
[ "Are experiments performed with any other pair of languages, how did proposed method perform compared to other models?" ]
task461-3c540e6cf0a145cfa6ea6c1e0d8e1a01
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. 
The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. Output:
[ "What are the differences in the use of emojis between gang member and the rest of the Twitter population?" ]
task461-1f15535588db449b9fa9149e05c47058
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: $\mathit {\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\mathit {\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Output:
[ "What are the disadvantages to clipping negative PMI?" ]
task461-31662df977ba4feeb6d34991a678c793
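For context on the +PPMI / -PMI notation in the record above: PPMI is standardly defined as PMI clipped at zero. A small Python sketch follows; the toy co-occurrence counts are invented, and the formula shown is the standard textbook definition rather than anything specific to the cited paper.

import math

def pmi(count_wc, count_w, count_c, total):
    # Pointwise mutual information from raw co-occurrence counts:
    # PMI(w, c) = log( p(w, c) / (p(w) * p(c)) ).
    return math.log((count_wc / total) / ((count_w / total) * (count_c / total)))

def ppmi(count_wc, count_w, count_c, total):
    # Positive PMI: negative associations (the "-PMI" part) are clipped to 0,
    # which is the "+PPMI" variant referred to above; unseen pairs get 0 too.
    if count_wc == 0:
        return 0.0
    return max(0.0, pmi(count_wc, count_w, count_c, total))

# Invented toy counts:
print(ppmi(count_wc=4, count_w=50, count_c=40, total=1000))    # positive PMI is kept
print(ppmi(count_wc=1, count_w=200, count_c=300, total=1000))  # negative PMI clipped to 0.0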
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Out of 43 teams our system ranked 421st with an official F1-score of 0.2905 on the test set. Although our model outperforms the baseline in the validation set in terms of F1-score, we observe important drops for all metrics compared to the test set, showing that the architecture seems to be unable to generalize well. Output:
[ "What were their results on the test set?" ]
task461-d4816b85a7cd42f5b5a6ea501ec2f6eb
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. 
Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. Output:
[ "Which existing language ID systems are tested?" ]
task461-9af1f859159c4f8d8b5f2572001f2667
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. 
However, it captures other information rather than only the translational equivalent in the case of verbs. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN. Output:
[ "What useful information does attention capture?" ]
task461-a6774f15b819408689cb42460150f6aa
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. 
The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6. Output:
[ "What model did they use?" ]
task461-29e018e836e743b6a44dfeeae74fa169
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). 
For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review. That is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results. Output:
[ "Which hand-crafted features are combined with word2vec?" ]
task461-5861aa1e69004d0e9119b4448db4332f
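The feature construction described in the record above (an averaged word vector per review concatenated with the minimum, mean, and maximum polarity scores, then fed to an SVM) can be sketched in a few lines. In the Python sketch below, the embedding table, the polarity lexicon, and the toy reviews are hypothetical stand-ins for the paper's resources.

import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for the pre-trained embeddings and the polarity lexicon.
embeddings = {
    "good": np.array([0.2, 0.1]),
    "bad": np.array([-0.3, 0.05]),
    "movie": np.array([0.0, 0.4]),
}
polarity = {"good": 0.8, "bad": -0.7, "movie": 0.0}

def review_features(tokens):
    # Average of the review's word vectors, concatenated with the
    # [min, mean, max] of its word-level polarity scores.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    scores = [polarity.get(t, 0.0) for t in tokens]
    avg_vec = np.mean(vecs, axis=0)
    extra = np.array([min(scores), float(np.mean(scores)), max(scores)])
    return np.concatenate([avg_vec, extra])

X = np.stack([review_features(["good", "movie"]),
              review_features(["bad", "movie"])])
y = np.array([1, 0])  # toy sentiment labels
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))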
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. 
For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set. Output:
[ "Which corpora do they use?" ]
task461-73f8d90b06bd4fb4b9a1ce188c1918eb
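The preprocessing pipeline in the record above (lower-casing, stemming, then training word2vec with 300-dimensional vectors) is easy to outline. The sketch below assumes gensim 4.x (which uses the vector_size argument; older releases used size) and NLTK's Porter stemmer, with a two-sentence toy corpus in place of the CNN, TIME, 20 Newsgroups, and Reuters-21578 data.

from gensim.models import Word2Vec
from nltk.stem import PorterStemmer

# Two toy documents standing in for the corpora named above.
docs = [
    "The markets rallied after the announcement",
    "Announcements about markets often move prices",
]

stemmer = PorterStemmer()
tokenized = [[stemmer.stem(w) for w in doc.lower().split()] for doc in docs]

# 300-dimensional embeddings as described above; min_count=1 only because the
# toy corpus is tiny (a real run would keep only the most frequent words).
model = Word2Vec(tokenized, vector_size=300, min_count=1)
print(model.wv["market"].shape)  # -> (300,)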
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). Output:
[ "What topics are included in the debate data?" ]
task461-bee1036b9f914e679615991c81fc7843
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. 
The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task-dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction. The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking the QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer. The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. 
The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model. We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location. As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story. The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph. Output:
[ "How is the information extracted?" ]
task461-0ce6da4156d44a27ac74b36e7f0bd91d
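The iterative vertex-extraction loop described in the record above (ask the QA model a question, mask out the top answer, repeat until the no-answer output wins) can be summarized as a short loop. In the Python sketch below, qa_model is a hypothetical callable standing in for the fine-tuned ALBERT QA model, and masking is approximated by simple string replacement.

NO_ANSWER = "<no-answer>"

def extract_entities(qa_model, story, question, max_entities=50):
    # Ask the same question repeatedly, masking each answer found, until the
    # model's preferred output is the no-answer token (or a safety cap is hit).
    entities = []
    context = story
    for _ in range(max_entities):
        answer, is_no_answer = qa_model(question, context)
        if is_no_answer or answer == NO_ANSWER:
            break
        entities.append(answer)
        context = context.replace(answer, "[MASK]")  # crude masking of the found span
    return entities

# Trivial fake QA model, for illustration only.
def fake_qa(question, context):
    for name in ["kitchen", "garden"]:
        if name in context:
            return name, False
    return NO_ANSWER, True

story = "You are in the kitchen. A door leads to the garden."
print(extract_entities(fake_qa, story, "Where is a location in the story?"))
# -> ['kitchen', 'garden']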
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. Output:
[ "Who are the experts?" ]
task461-4123d90e69644818a56bb744702e64a1
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Tweet IDs 6, 8, and 10 are some samples containing offensive words and slurs which are not hateful or offensive in all cases, and their writers used this type of language in their daily communications. 
Given these pieces of evidence, by considering the content of tweets, we can see in tweet IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither explicit nor implicit hatred content exists. Output:
[ "What evidence do the authors present that the model can capture some biases in data annotation and collection?" ]
task461-1ddc358894664ae68550f95296ed923d
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. 
Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10. At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification. Output:
[ "How is quality of the citation measured?" ]
task461-1c380e65e27440059bed2943cbd8a760
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The CRF module achieved the best result on the Thai sentence segmentation task BIBREF8 ; therefore, we adopt the Bi-LSTM-CRF model as our baseline. Output:
[ "Which deep learning architecture do they use for sentence segmentation?" ]
task461-f1404c01a9184998af997a2ea7266d8c
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: This paper experimented with two different models and compared them against each other. Output:
[ "what was the baseline?" ]
task461-e42f94df12c44149b40aefc15e2cc188
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Output:
[ "What are the differences between this dataset and pre-existing ones?" ]
task461-2775f3d9607c4b809d739c372e16bca8
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels BIBREF1 have achieved state-of-the-art results for several speech recognition tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Models: Our baseline CNN model BIBREF21 consists of 15 convolutional layers and one fully-connected layer. Output:
[ "Is model compared against state of the art models on these datasets?" ]
task461-d91c960bc1c2460495906e0b1a7ab442
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We find that all the factors we tested can qualitatively affect how a model generalizes on the question formation task. These factors are the type of recurrent unit, the type of attention, and the choice of sequential vs. tree-based model structure. Output:
[ "What architectural factors were investigated?" ]
task461-efc5ba5bd3574d1a98fe7b133e3c9500
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Output:
[ "What are the differences in the use of YouTube links between gang member and the rest of the Twitter population?" ]
task461-31334d104d74458083af31d58a267b46
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. 
In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1. Output:
[ "How is octave convolution concept extended to multiple resolutions and octaves?" ]
task461-3fbc5820db9542e183f1c3b6ed4a7b12
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: There are three main components to the model: 1) input encoder, 2) dynamic memory, and 3) output module. We will describe these three modules in detail. The input encoder and output module implementations are similar to the Entity Network BIBREF17, and the main novelty lies in the dynamic memory. 
We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\lbrace e_1, \ldots , e_N\rbrace $ , a question on the document represented as another sequence of words and an answer to the question. Output:
[ "How is knowledge stored in the memory?" ]
task461-772a451eefef4443b5aef8cecf6c7848
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Slot Filling is an information extraction task which has become popular in the last years BIBREF3 . The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents. Output:
[ "What is the task of slot filling?" ]
task461-d559042b44c54f3c9ce2585e2c18305d
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th, 2016 on Microsoft are extracted from the Twitter API BIBREF15 . The news on Twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! 
Finance BIBREF16 . Output:
[ "What dataset is used to train the model?" ]
task461-4e9610d56ed14592a9df8d960027a037
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: A data collection process involving location neutral keywords used by gang members, with an expanded search of their retweet, friends and follower networks, led to identifying 400 authentic gang member profiles on Twitter. 
Our study discovered that the text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles. Output:
[ "How is the ground truth of gang membership established in this dataset?" ]
task461-a95b10081ce24fad80d8893b38424311
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. 
The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. Output:
[ "What is an \"answer style\"?", "What do they mean by answer styles?", "Is there exactly one \"answer style\" per dataset?" ]
task461-5f8ea527d291489e9b9f685bad7d8a58
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. Output:
[ "Do they compare to other models appart from HAN?" ]
task461-419e40315e0049a4b170a7a19b8bc96e
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B. Output:
[ "How do they define similar equations?" ]
task461-b08c592dea3d401ba0f47bb3b64afb2b
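The record above describes retrieving similar equations by Euclidean distance between their context-vector representations. The sketch below is only an illustration of that retrieval step, not the paper's implementation; the array shapes, variable names, and use of numpy are assumptions.

```python
import numpy as np

def nearest_equations(query_vec, equation_vecs, k=5):
    """Return indices of the k equations whose context vectors lie
    closest to query_vec under the Euclidean distance."""
    # Euclidean distance from the query to every stored equation embedding.
    dists = np.linalg.norm(equation_vecs - query_vec, axis=1)
    # Indices of the k smallest distances.
    return np.argsort(dists)[:k]

# Hypothetical usage: 1,000 equations embedded in a 300-dimensional space.
equation_vecs = np.random.rand(1000, 300)
print(nearest_equations(equation_vecs[0], equation_vecs, k=5))
```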
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. Output:
[ "what is the size of BoolQ dataset?" ]
task461-c24ac5e551aa4225a96f2f8b0c9a61ac
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time Output:
[ "What types of various community texts have been investigated for exploring global structure of binomials?" ]
task461-9ff6962c990b40f6a91630aaa5aea580
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. Output:
[ "What is the architecture of their model?" ]
task461-24100d0dd512467c980cd16f10e5c6b6
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Conditional Random Fields Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. BiLSTM-CRF Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. 
Multi-Task Learning Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. BioBERT Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. Output:
[ "What baseline systems are proposed?" ]
task461-de1a5032515d43fdae8217164ad0128a
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. 
First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. Output:
[ "Do all questions in the dataset allow the answers to pick from 2 options?" ]
task461-e617a6c57db44e9892931c8f9554a131
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . Output:
[ "Does this paper propose a new task that others can try to improve performance on?" ]
task461-dbd6b463a3ad4406be9c38a33c4ac1b1
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: For every input data point and possible target class, LRP delivers one scalar relevance value per input variable, hereby indicating whether the corresponding part of the input is contributing for or against a specific classifier decision, or if this input variable is rather uninvolved and irrelevant to the classification task at all. Output:
[ "Does the LRP method work in settings that contextualize the words with respect to one another?" ]
task461-ec8b5432a1364657a0f95546f554259d
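The record above says LRP delivers one scalar relevance value per input variable for a chosen target class. As a hedged illustration of how such relevance is commonly redistributed layer by layer, the sketch below implements one widespread propagation rule (the epsilon rule) for a single dense layer in numpy; the cited work may use different rules or layer types, and all names and shapes here are assumptions.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP backward pass through one dense layer.

    a:     input activations, shape [n_in]
    W:     weight matrix, shape [n_in, n_out]
    b:     bias vector, shape [n_out]
    R_out: relevance arriving at the layer outputs, shape [n_out]
    """
    z = a @ W + b                                        # forward pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised quotients
    R_in = a * (W @ s)                                    # redistribute to inputs
    return R_in

# Toy check: with zero bias and small eps, relevance is approximately conserved.
rng = np.random.default_rng(0)
a, W, b = rng.random(4), rng.random((4, 3)), np.zeros(3)
R_out = rng.random(3)
R_in = lrp_epsilon_dense(a, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())
```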
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Output:
[ "What clustering algorithm is used on top of the VerbNet-specialized representations?" ]
task461-62fac629a46048bfbc1dd76695d5eb04
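For readers unfamiliar with the clustering step mentioned in the record above, here is a hedged sketch of spectral clustering over verb representations using scikit-learn. scikit-learn's SpectralClustering is a normalised-cut-style method, not necessarily the exact MNCut algorithm cited in the context, and the embedding dimensions and cluster count below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical verb embeddings: 500 verbs in a 300-dimensional space.
rng = np.random.default_rng(0)
verb_vectors = rng.random((500, 300))

# Spectral clustering on a k-nearest-neighbour affinity graph.
clusterer = SpectralClustering(
    n_clusters=50,
    affinity="nearest_neighbors",
    n_neighbors=10,
    assign_labels="kmeans",
    random_state=0,
)
labels = clusterer.fit_predict(verb_vectors)
print(labels[:20])
```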
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses the WordPiece tokenizer to tokenize the text. The WordPiece tokenizer splits utterances based on the longest-prefix-matching algorithm to generate tokens. The tokens thus obtained are fed as input to the BERT model. 
When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance. Output:
[ "What is the reason behind the drop in performance using BERT for some popular task?" ]
task461-d857b12d87fb40038c0c108c05224b83
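The record above attributes BERT's drop on noisy text to WordPiece's longest-prefix-matching behaviour. The following is a simplified sketch of that greedy matching, not the implementation shipped with BERT, and the toy vocabulary is invented; it only illustrates how a misspelled word falls apart into unrelated subwords.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-prefix-match tokenization in the spirit of WordPiece.
    Continuation pieces are marked with a leading '##'."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate
            if candidate in vocab:      # longest matching prefix wins
                piece = candidate
                break
            end -= 1
        if piece is None:               # nothing matched: unknown token
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary: the misspelling "helo" breaks into unrelated subwords.
vocab = {"hello", "hel", "he", "##lo", "##llo", "##o"}
print(wordpiece_tokenize("hello", vocab))  # ['hello']
print(wordpiece_tokenize("helo", vocab))   # ['hel', '##o']
```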
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias. Output:
[ "What is the road exam metric?" ]
task461-bf969ab5135d4651959cc7389f2f943d
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: This method, which we refer to as progressive dynamic hurdles (PDH), allows models that are consistently performing well to train for more steps. 
It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small $s_0$ number of steps before being evaluated for fitness. However, after a predetermined number of child models, $m$, have been evaluated, a hurdle, $h_0$, is created by calculating the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$, is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continues in the same fashion, except models with fitness greater than $h_1$ after $s_0 + s_1$ steps of training are granted an additional $s_2$ number of train steps, before being evaluated for their final fitness. This process is repeated until a satisfactory number of maximum training steps is reached. Output:
[ "what is the proposed Progressive Dynamic Hurdles method?" ]
task461-951540a2ea234ede822c847461e79206
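Since the record above walks through the Progressive Dynamic Hurdles procedure step by step, a compact sketch of that control flow may help. This is only an illustration of the described loop, not the authors' code: sample_child and train_and_eval are hypothetical callbacks, tournament-selection evolution is omitted, and the demo at the bottom stands in for real architecture training.

```python
def progressive_dynamic_hurdles(sample_child, train_and_eval, step_budgets, m):
    """Sketch of the PDH loop: children that beat every hurdle so far earn
    additional training steps; a new hurdle (the mean fitness of the
    longest-trained members) is added after every m children."""
    population = []   # tuples of (candidate, fitness, steps_trained)
    hurdles = []      # h0, h1, ... mean-fitness thresholds
    for i in range(m * len(step_budgets)):
        child = sample_child()
        fitness = train_and_eval(child, step_budgets[0])
        trained = step_budgets[0]
        # Keep training while the child clears every hurdle seen so far.
        for level, hurdle in enumerate(hurdles, start=1):
            if fitness <= hurdle or level >= len(step_budgets):
                break
            fitness = train_and_eval(child, step_budgets[level])
            trained += step_budgets[level]
        population.append((child, fitness, trained))
        # After every m children, add a hurdle from the longest-trained members.
        if (i + 1) % m == 0 and len(hurdles) < len(step_budgets) - 1:
            max_steps = max(p[2] for p in population)
            longest = [p[1] for p in population if p[2] == max_steps]
            hurdles.append(sum(longest) / len(longest))
    return max(population, key=lambda p: p[1])

if __name__ == "__main__":
    import random
    best = progressive_dynamic_hurdles(
        sample_child=lambda: random.random(),               # "architecture" = a number
        train_and_eval=lambda c, steps: c + 0.001 * steps,   # fake fitness signal
        step_budgets=[100, 200, 400],
        m=5,
    )
    print(best)
```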
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Building Extractive CNN/Daily Mail Output:
[ "What is the problem with existing metrics that they are trying to address?" ]
task461-7c7fadbbedeb4079827bc1a79af5c574
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative. 
Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents. Output:
[ "Which real-world datasets did they use?" ]
task461-875736d98f6e4cdfb528f7e69ea88abc
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Curriculum Plausibility ::: Conversational Attributes ::: Specificity A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. 
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. Curriculum Plausibility ::: Conversational Attributes ::: Continuity A coherent response not only responds to the given query, but also triggers the next utterance. Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness A conversation is considered to be coherent if the response correlates well with the given query. Output:
[ "What five dialogue attributes were analyzed?" ]
task461-8bab2fafbbcc407c8d2038dc3488ef64
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: To evaluate the responses generated by all compared methods, we compute the following automatic metrics on our test set: BLEU: BLEU-n measures the average n-gram precision on a set of reference responses. We report BLEU-n with n=1,2,3,4. 
Distinct-1 & distinct-2 BIBREF5: We count the numbers of distinct uni-grams and bi-grams in the generated responses and divide the numbers by the total number of generated uni-grams and bi-grams in the test set. These metrics can be regarded as an automatic metric to evaluate the diversity of the responses. Output:
[ "What automatic metrics are used?" ]
task461-37cd4302042041bfb545af729db8b64e
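To make the distinct-1 / distinct-2 computation described in the record above concrete, here is a minimal sketch. It assumes whitespace tokenization and is not tied to any particular library; the toy responses are invented.

```python
def distinct_n(responses, n):
    """Number of unique n-grams divided by the total number of n-grams
    across all generated responses (higher means more diverse)."""
    total, unique = 0, set()
    for response in responses:
        tokens = response.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Generic, repetitive replies drag the scores down.
responses = ["i don t know", "i don t know", "that sounds like a great plan"]
print(distinct_n(responses, 1), distinct_n(responses, 2))
```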
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. 
Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics. Output:
[ "Which frozen acoustic model do they use?" ]
task461-e931c17362654f3489f9ab6b4b8c4a92
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance. Output:
[ "What experiments are proposed to test that upper layers produce context-specific embeddings?" ]
task461-66c859ff7553408b8263a6491af7bb86
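The record above names three contextuality metrics without defining them. As a rough illustration only, the sketch below implements one common formulation of self-similarity (the average pairwise cosine similarity of a word's contextualized representations across its occurrences); the exact definitions used in the paper may differ, and the input array here is a random placeholder.

```python
import numpy as np

def self_similarity(reps):
    """Average pairwise cosine similarity between a word's contextualized
    representations (one row per occurrence of the word). Values near 1
    suggest the representation barely changes with context."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = reps.shape[0]
    return (sims.sum() - n) / (n * (n - 1))  # mean over off-diagonal pairs

# placeholder input: 5 occurrences of one word, 768-dimensional embeddings
reps = np.random.rand(5, 768)
print(self_similarity(reps))
```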
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. 
For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. Output:
[ "Which review dataset do they use?" ]
task461-3e9a6ee4caad45a1a3cd6d372e04a571
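The partition quoted in the record above (per domain: the first 1000 samples for development, the next 1000 for test, the rest for training) amounts to simple list slicing; the function and variable names below are illustrative, not taken from the paper.

```python
def split_domain(samples):
    """Per-domain partition as quoted above: first 1000 samples -> dev,
    next 1000 -> test, the remainder -> train."""
    dev, test, train = samples[:1000], samples[1000:2000], samples[2000:]
    return train, dev, test

# illustrative usage with placeholder review records
train, dev, test = split_domain([{"review": f"r{i}"} for i in range(5000)])
print(len(train), len(dev), len(test))  # 3000 1000 1000
```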
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. 
The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. Output:
[ "What are the differences in language use between gang member and the rest of the Twitter population?" ]
task461-a4974c120afa4a1fa5bc3f86dcbba45a
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. 
In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. Output:
[ "What experiments do they use to quantify the extent of interpretability?" ]
task461-1d3c9801c6ec4953b684ed523bfe79de
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: Baselines. We compare our approach (FacTweet) to the following set of baselines: [leftmargin=4mm] LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier. Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. 
We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts. LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently. LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier. FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets. Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline. Output:
[ "What baselines do they compare to?" ]
task461-ae70099ff62f41afbca25bc277d6efb1
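As a sketch of the first baseline listed in the record above (LR + Bag-of-words: aggregate a feed's tweets, bag-of-words features, logistic regression), here is a minimal scikit-learn pipeline; the data format, label scheme, and vectorizer settings are assumptions rather than details from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each feed is represented by concatenating its tweets into one document;
# this is an assumption about how "aggregate the tweets of a feed" is done.
feeds = [
    " ".join(["report on the new policy", "follow-up with official figures"]),
    " ".join(["you won't believe this", "shocking secret they hide"]),
]
labels = [1, 0]  # 1 = factual, 0 = non-factual (label scheme assumed)

baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(feeds, labels)
print(baseline.predict([" ".join(["another shocking secret", "unbelievable"])]))
```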
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. Output:
[ "what is the source of the news sentences?" ]
task461-65c4911e0146499f8357f7b3f123d995
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). 
The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings. Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”. Output:
[ "How do they evaluate their resulting word embeddings?" ]
task461-5cd6f41b526146708daa0ca5c038e4c3
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? Now complete the following example - Input: An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. 
For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16. Output:
[ "Does the paper discuss previous models which have been applied to the same task?" ]
task461-52100d99bcab45cfbb211ec0e534034c
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Positive Example 1 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? Positive Example 2 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language are the tweets? Negative Example 1 - Input: Using this annotation model, we create a new large publicly available dataset of English tweets. Output: In what language is tweets? Negative Example 2 - Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: What is the size of the dataset? 
Now complete the following example - Input: In the experiments, we compare model performance between GluonCV/NLP and other open source implementations with Caffe, Caffe2, Theano, and TensorFlow, including ResNet BIBREF8 and MobileNet BIBREF9 for image classification (ImageNet), Faster R-CNN BIBREF10 for object detection (COCO), Mask R-CNN BIBREF11 for instance segmentation, Simple Pose BIBREF12 for pose estimation (COCO), textCNN BIBREF13 for sentiment analysis (TREC), and BERT BIBREF14 for question answering (SQuAD 1.1), sentiment analysis (SST-2), natural language inference (MNLI-m), and paraphrasing (MRPC). Output:
[ "Do they experiment with the toolkits?" ]
task461-b2d05b5a219646cc8e98b47198a4f8e0