| input (stringlengths 3.23k–13.3k) | output (sequencelengths 1–3) | id (stringlengths 40) |
|---|---|---|
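Below is a minimal sketch of how rows with this schema (an `input` prompt string, an `output` list of one to three reference questions, and a 40-character `id`) might be loaded and inspected with the Hugging Face `datasets` library; the dataset path used here is a placeholder, not the actual identifier.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library is installed
# and the data is hosted under some hub path (the path below is hypothetical).
from datasets import load_dataset

ds = load_dataset("org-name/task461-dataset", split="train")  # hypothetical path

row = ds[0]
# Each row carries three fields matching the columns above:
#   "input"  - the full prompt: definition, positive/negative examples, and the new context
#   "output" - a list of 1-3 reference questions for the final context
#   "id"     - a 40-character example identifier such as "task461-..."
print(row["id"])
print(row["output"])
print(row["input"][:300])  # prompts run to several thousand characters
```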
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer.
Output:
| [
"Is new approach tested against state of the art?"
] | task461-1d306c4a75d4487d926c2f46ad115465 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For each variable in the encoding set, learn the new embeddings using the embeddings train set. Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics).
Output:
| [
"How do their train their embeddings?"
] | task461-835a31a0213144abad6fdd55d4b879a0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6.
Output:
| [
"Do they compare to previous models?"
] | task461-1de9ed6844344071b4c5b21e99d9bafa |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 .
Output:
| [
"What KB is used in this work?"
] | task461-1118a1c30b0742ed8bef02ea956a92c2 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our baseline system (Baseline_1850K) is taken from BIBREF13 . It consists of a DNN trained to predict subword targets within the keywords. The input to the DNN consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . The topology consists of a 1-D convolutional layer with 92 filters (of shape 8x8 and stride 8x8), followed by 3 fully-connected layers with 512 nodes and a rectified linear unit activation each. A final softmax output predicts the 7 subword targets, obtained from the same forced alignment process described in SECREF5 . This results in the baseline DNN containing 1.7M parameters, and performing 1.8M multiply-accumulate operations per inference (every 30ms of streaming audio). A keyword spotting score between 0 and 1 is computed by first smoothing the posterior values, averaging them over a sliding window of the previous 100 frames with respect to the current INLINEFORM3 ; the score is then defined as the largest product of the smoothed posteriors in the sliding window as originally proposed in BIBREF6 .
Output:
| [
"What previous approaches are considered?"
] | task461-e2f230286203448b87c92c4c241a24cb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback.
Output:
| [
"How are workers trained?"
] | task461-321074f7db6c40cf94409db0a4d587af |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm.
Output:
| [
"How do they model a city description using embeddings?"
] | task461-3f40d908dc304e1aa3b9660fe44315ad |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise.
Output:
| [
"How they evaluate their approach?"
] | task461-bb6fb4b72e5c483a9aca22f3ae4f2716 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances.
Output:
| [
"Where is the dataset from?"
] | task461-f05933c2671c4319b1dacd0f89137b2d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value.
Output:
| [
"How does explicit constraint on the KL divergence term that authors propose looks like?"
] | task461-6c4ddcedb74b49f287ac45a2a48517c1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 .
Output:
| [
"Where are the hotel reviews from?"
] | task461-f50f0c5eaa064984a46290060a8e5b8d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few.
Output:
| [
"What kind is noise is present in typical industrial data?"
] | task461-f309076b2fd1443dba68ebaf7cd52022 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table.
Output:
| [
"Are the OpenIE extractions all triples?"
] | task461-53fa0429e68342238a0e764f5e4165ed |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision.
Output:
| [
"How do they encourage understanding of literature as part of their objective function?"
] | task461-f6dc632606ad4df2af4b12b733f366b9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns.
Output:
| [
"How does this model overcome the assumption that all words in a document are generated from a single event?"
] | task461-69d9270a0da04f1dba5a77847c332db5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.
Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:
Output:
| [
"Which dataset do they use?"
] | task461-5b1e24f76338472ca79b889eaecb39e4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17, People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2. Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in the multi-lingual case.
Output:
| [
"which public datasets were used?"
] | task461-ce2f07363e6b44e0b8422f77ed61dfcb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, where maximum gains of 2.7%, 1.0%, and 1.9% can be observed on the three datasets, respectively.
Output:
| [
"Do experiment results show consistent significant improvement of new approach over traditional CNN and RNN models?"
] | task461-5b7c272a6f1e4a24bd988c5082fd9271 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For instance, we observed that winter words were grouped together and far away from summer ones. Weekdays were grouped together as well, and far from weekend days.
Output:
| [
"What geometric properties do embeddings display?"
] | task461-51c7a4f9682d4b0bb42ddf974ccd6f98 |
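Aside: the clustering observation quoted above (season and weekday words forming separated groups) is the kind of structure one can probe with cosine similarity over the learned vectors. A small numpy sketch follows; the `vectors` dictionary is a hypothetical stand-in for whatever embedding table was actually trained.

```python
# Probe whether groups of words (e.g. winter vs. summer terms) cluster together
# by comparing average within-group and across-group cosine similarity.
# `vectors` is a hypothetical dict {word: np.ndarray}; plug in any embedding table.
import itertools
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_similarity(words_a, words_b, vectors):
    pairs = [(a, b) for a, b in itertools.product(words_a, words_b) if a != b]
    return float(np.mean([cosine(vectors[a], vectors[b]) for a, b in pairs]))

def cluster_report(group_a, group_b, vectors):
    within_a = mean_similarity(group_a, group_a, vectors)
    within_b = mean_similarity(group_b, group_b, vectors)
    across = mean_similarity(group_a, group_b, vectors)
    # Well-separated clusters show higher within-group than across-group similarity.
    return within_a, within_b, across
```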
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values.
Output:
| [
"How is order of binomials tracked across time?"
] | task461-3b245310c0c342639449a14993aa5a64 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Because of this, we do not claim that this dataset can be considered a ground truth.
Output:
| [
"How is the ground truth for fake news established?"
] | task461-637d72e6e8244067941293499f89e469 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24.
Output:
| [
"Is text-to-image synthesis trained is suppervized or unsuppervized manner?"
] | task461-297e51db342e4fe08ce9660ce3dbed4b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The E2E NLG challenge dataset BIBREF21, a crowd-sourced dataset of 50k instances in the restaurant domain, is utilized in our experiments. Our models are trained on the official training set and evaluated on the official test set.
Output:
| [
"What datasets did they use?"
] | task461-b81d715bca3844578e781694213e45bd |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC), respectively. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two setups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence.
Output:
| [
"What classification approaches were experimented for this task?"
] | task461-9d6836bc40c54292828b7f25f650959e |
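Aside: the input above contrasts support-vector regression/classification with neural counterparts trained either on top-5 classification labels or on ROUGE-SU4 F1 regression targets. Below is a hedged scikit-learn sketch of the two classical baselines only; the feature matrix and label vectors are random placeholders, not the paper's actual features.

```python
# Two classical baselines over sentence feature vectors:
#   - SVR regresses each sentence's ROUGE-SU4 F1 score,
#   - SVC classifies whether a sentence belongs to the summary.
# X, rouge_scores and summary_labels are placeholders for the paper's real data.
import numpy as np
from sklearn.svm import SVR, SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                      # 200 sentences, 30 assumed features
rouge_scores = rng.uniform(size=200)                # regression targets in [0, 1]
summary_labels = (rouge_scores > 0.5).astype(int)   # toy classification labels

regressor = SVR(kernel="rbf").fit(X, rouge_scores)
classifier = SVC(kernel="rbf", probability=True).fit(X, summary_labels)

# Rank sentences by predicted score / positive-class probability and keep the top k.
k = 5
top_by_regression = np.argsort(regressor.predict(X))[::-1][:k]
top_by_classification = np.argsort(classifier.predict_proba(X)[:, 1])[::-1][:k]
```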
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found adding variance normalization to mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves the model, so we used them in the other experiments. Models trained with spectrogram features converge more slowly and to a worse minimum, but the difference when using CMN is not very large compared to FBanks.
Output:
| [
"What normalization techniques are mentioned?"
] | task461-dbb4573e97a94759863ad7884f10e0f6 |
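Aside: the input above compares filterbank features with cepstral mean normalization (CMN), mean-and-variance normalization (CMVN), and delta/delta-delta features. A small numpy sketch of per-utterance CMN/CMVN and a standard delta computation follows; the window size, epsilon, and feature dimensions are illustrative choices, not the paper's exact settings.

```python
# Per-utterance feature normalization and delta features for a (frames x dims) matrix.
import numpy as np

def cmn(feats):
    """Cepstral mean normalization: subtract the per-dimension mean."""
    return feats - feats.mean(axis=0, keepdims=True)

def cmvn(feats, eps=1e-8):
    """Mean and variance normalization."""
    return (feats - feats.mean(axis=0, keepdims=True)) / (feats.std(axis=0, keepdims=True) + eps)

def deltas(feats, window=2):
    """First-order deltas via the standard regression formula over +/- window frames."""
    padded = np.pad(feats, ((window, window), (0, 0)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, window + 1))
    return sum(n * (padded[window + n:len(feats) + window + n] -
                    padded[window - n:len(feats) + window - n])
               for n in range(1, window + 1)) / denom

fbanks = np.random.randn(300, 40)   # toy data: 300 frames of 40-dim FBanks
d1 = deltas(fbanks)
features = np.concatenate([cmn(fbanks), d1, deltas(d1)], axis=1)  # FBanks + deltas + delta-deltas
```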
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Paraphrases can be obtained by translating an English string into a foreign language and then back-translating it into English.
Output:
| [
"It looks like learning to paraphrase questions, a neural scoring model and a answer selection model cannot be trained end-to-end. How are they trained?"
] | task461-82b39cbfc2f1450caa33db2b2a1c0e94 |
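Aside: back-translation as described in the input (translate an English string into a foreign language, then back into English) can be sketched with any MT system. The snippet below assumes a hypothetical `translate(text, src, tgt)` helper standing in for whatever translation models the paper actually used.

```python
# Paraphrase generation by round-trip (back-)translation through pivot languages.
# `translate` is a hypothetical helper; substitute any real MT system or API.
def translate(text: str, src: str, tgt: str) -> str:
    raise NotImplementedError("plug in a real MT system here")

def back_translate(sentence: str, pivots=("de", "fr", "es")) -> list[str]:
    """Return candidate paraphrases obtained via each pivot language."""
    candidates = []
    for pivot in pivots:
        foreign = translate(sentence, src="en", tgt=pivot)
        paraphrase = translate(foreign, src=pivot, tgt="en")
        # Keep only candidates that differ from the original sentence.
        if paraphrase.strip().lower() != sentence.strip().lower():
            candidates.append(paraphrase)
    return candidates
```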
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, $X = x_1, \ldots, x_T$, where each $x_t$ is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, $f(X)$.
Output:
| [
"How do they represent input features of their model to train embeddings?"
] | task461-d3b8719a7d214c2798b4ab52efa38515 |
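Aside: the input defines an acoustic word embedding as a map from a variable-length sequence of frame vectors to a single fixed-dimensional vector. The snippet below only illustrates that input/output contract, using mean pooling as a trivial stand-in; the paper's actual embedding function is a trained network, not pooling.

```python
# A trivial acoustic word "embedding": collapse a variable-length sequence of
# frame-level feature vectors (T x d) into one fixed-dimensional vector.
# Mean pooling is only a stand-in for the learned embedding network.
import numpy as np

def embed_segment(frames: np.ndarray) -> np.ndarray:
    """frames: (T, d) array of frame-level acoustic features for one word segment."""
    assert frames.ndim == 2
    return frames.mean(axis=0)

seg_a = np.random.randn(57, 39)   # e.g. 57 frames of 39-dim features (toy data)
seg_b = np.random.randn(88, 39)   # a different-length segment
emb_a, emb_b = embed_segment(seg_a), embed_segment(seg_b)
# Both embeddings have the same fixed dimensionality regardless of segment length.
assert emb_a.shape == emb_b.shape == (39,)
```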
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our contribution leverages pre-trained word embeddings (GloVe, trained on wikipedia+gigaword corpus), the DepecheMood affective lexicon, and convolutional neural networks.
Output:
| [
"What embeddings do they use?",
"What is the source of their lexicon?"
] | task461-f61e1a1081a649c6b3485687b12d25a1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as:
where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set).
Output:
| [
"How is the delta-softmax calculated?"
] | task461-26436806b3b74a81908e325bf112865f |
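Aside: the input describes delta-softmax as the reduction in the correct relation's softmax probability when position $i$ is masked, but the equation body itself is elided in the excerpt. The LaTeX below is a hedged reconstruction consistent with that prose description and the symbols defined after "where"; the paper's exact notation may differ.

```latex
% Hedged reconstruction of the delta-softmax metric from its prose description;
% the source paper's exact notation may differ.
\Delta_s(i) = p_{\mathrm{softmax}}\!\left(rel \mid X\right)
            - p_{\mathrm{softmax}}\!\left(rel \mid X_{mask=i}\right),
\qquad i \in \{1, \dots, N\}
```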
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We collected all available comments in the stories from Reddit from August 2015.
Output:
| [
"what is the source of the new dataset?"
] | task461-55099a1b76da4a69ae3b05ad0652a516 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support.
Output:
| [
"what evaluation metrics were used?"
] | task461-ab8a7fe4897c411b856164f3a937d4a6 |
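Aside: weighted F1 as described in the input (per-class F1 averaged with each class weighted by its support) corresponds directly to scikit-learn's `average="weighted"` option. A small check with toy labels; the paper's real predictions would replace `y_true`/`y_pred`.

```python
# Weighted F1: per-class F1 scores averaged with weights proportional to class support.
from sklearn.metrics import f1_score
import numpy as np

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])

per_class = f1_score(y_true, y_pred, average=None)   # one F1 score per class
support = np.bincount(y_true)                        # class frequencies in y_true
manual_weighted = float(np.sum(per_class * support) / support.sum())
sklearn_weighted = f1_score(y_true, y_pred, average="weighted")
assert np.isclose(manual_weighted, sklearn_weighted)
```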
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We therefore propose attention models to infer the latent context, i.e., the series of posts that trigger an intervention.
Output:
| [
"What type of latent context is used to predict instructor intervention?"
] | task461-c65d7f0f26b64c4dad6da4c77fe9d330 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends" BIBREF2.
Output:
| [
"How is \"propaganda\" defined for the purposes of this study?"
] | task461-1f3ff6d88cb3441b86827ab35e62136f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7. The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. In total, 2247 triples were extracted. In total, 11262 judgements were obtained after running the annotation process.
Output:
| [
"What is the size of the released dataset?"
] | task461-63409bc6ae1c42778aca853ff528fdf0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our data set contains 3,209 reviews about 553 different cars from 49 different car manufacturers.
Output:
| [
"How big is dataset of car-speak language?"
] | task461-60ae52ace30b42f99c38dc2fa5ce9104 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run the experiment ten times for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro, the models that include the multi-attention mechanism outperform the others, and in particular the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer yielded a significant improvement.
Output:
| [
"Which variation provides the best results on this dataset?"
] | task461-4b0a93343be04beaaff2c338889872e7 |
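The input above refers to the F1 Score, the harmonic mean of precision and recall, and to its macro average. As an illustrative sketch only, assuming scikit-learn and invented labels rather than anything from the paper, per-class, macro-averaged, and support-weighted F1 can be computed as follows:

# Illustrative sketch: per-class, macro-averaged, and support-weighted F1.
# The label values below are hypothetical and only serve the example.
from sklearn.metrics import f1_score

y_true = ["anger", "joy", "joy", "fear", "anger", "joy"]
y_pred = ["anger", "joy", "fear", "fear", "joy", "joy"]

per_class = f1_score(y_true, y_pred, average=None, labels=["anger", "fear", "joy"])
macro_f1 = f1_score(y_true, y_pred, average="macro")        # unweighted mean over classes
weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # weighted by per-class support

print(per_class, macro_f1, weighted_f1)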
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared tasks to handle the problem of controlling content in SNSs. The fundamental idea of this system is to build a system that views an input from diverse perspectives. That is because of the variety of meaning in the Vietnamese language, especially with acronyms and teen code.
Output:
| [
"Is the data all in Vietnamese?"
] | task461-7bb57b0fcb1943a3b864caee727158c0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%).
Output:
| [
"what is the size of their dataset?"
] | task461-8d1ae850844c46a1a7a9a121fc3d1b8f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Training was accomplished using the Kaldi Babel recipe with 198 hours of data in 10 languages (Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese) from GlobalPhone.
Output:
| [
"What languages are considered?"
] | task461-7b4ffa46c6a64843a98d434685362bdb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation.
Output:
| [
"Who annotated the data?"
] | task461-c8517033674e45779623c784bb86f574 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this context, we decided to study the three main types of architectures that have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6, which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8, which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units, and 3) RNN-transducer BIBREF0, BIBREF9, BIBREF10, which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model.
Output:
| [
"What are the existing end-to-end ASR approaches for the French language?"
] | task461-1e4004acf5964617a3f16bd6b6581c93 |
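The input above contrasts CTC, attention-based models, and the RNN-transducer. As a hedged sketch only, assuming PyTorch and random inputs rather than anything from the cited papers, CTC training reduces to a dynamic-programming loss over per-frame log-probabilities, with outputs treated as conditionally independent across frames:

# Illustrative sketch: CTC loss over random frame posteriors (PyTorch assumed).
import torch
import torch.nn as nn

T, N, C = 50, 4, 32  # frames, batch size, output units (index 0 reserved for blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12), dtype=torch.long)   # hypothetical label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow back through the frame-level predictions
print(loss.item())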
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as that females are more emotional and that situations involving male agents are more fearful BIBREF27. The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28.
Output:
| [
"Which race and gender are given higher sentiment intensity predictions?"
] | task461-6d97293a5c464b12b27cbf83d6154d96 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: BoW is the standard word counting method whereby the feature vector represents the term frequency of the words in a sentence. TF-IDF is similar to BoW, except that it is derived by counting the words in the sentence weighted by each word's term-frequency and inverse-document-frequency BIBREF31. A cluster-based variant is derived from clustering word embeddings with k-means into 5000 clusters, followed by a BoW representation of the words over the 5000 clusters.
Output:
| [
"What other non-neural baselines do the authors compare to? "
] | task461-957efcdfbe69468591fd230a4930ebb0 |
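The input above describes bag-of-words, TF-IDF, and cluster-based bag-of-words features. The sketch below is illustrative only, assuming scikit-learn and made-up sentences, and covers the first two representations rather than the authors' implementation:

# Illustrative sketch: BoW term-frequency vectors and TF-IDF-weighted vectors.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

sentences = [
    "the annotators read the document carefully",
    "the model predicts an answer for each question",
]

bow = CountVectorizer().fit_transform(sentences)    # raw term counts per sentence
tfidf = TfidfVectorizer().fit_transform(sentences)  # counts weighted by tf-idf

print(bow.toarray())
print(tfidf.toarray())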
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: According to the distribution of the sentiment score, the sentiment on Tesla is slightly skewed towards positive during the testing period.
Output:
| [
"Do the authors give any examples of major events which draw the public's attention and the impact they have on stock price?"
] | task461-011ff5eab6ed4f8f8966dee2d46c6d4a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In contrast to most encoder-decoder architectures, which are purely based on the recurrent neural network (RNN) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part of the encoder and integrate them with a deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 as the higher part. On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2.
Output:
| [
"Which architecture do they use for the encoder and decoder?"
] | task461-dfa99df0227943499d319a9afbfd4c86 |
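The input above mentions global attention whose scores are computed by a multi-layer perceptron. The exact form of Eq. EQREF2 is not given in the excerpt, so the NumPy sketch below shows only a generic additive, MLP-scored attention over encoder states, with hypothetical parameter names:

# Illustrative sketch: additive (MLP-scored) global attention in NumPy.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, d = 6, 8                    # encoder time steps, hidden size
enc = np.random.randn(T, d)    # encoder states h_1 ... h_T
dec = np.random.randn(d)       # current decoder state s_t

W_enc = np.random.randn(d, d)  # hypothetical MLP parameters
W_dec = np.random.randn(d, d)
v = np.random.randn(d)

scores = np.tanh(enc @ W_enc + dec @ W_dec) @ v  # one scalar score per encoder step
alpha = softmax(scores)                          # attention weights, sum to 1
context = alpha @ enc                            # weighted sum of encoder states
print(context.shape)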
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We downloaded 76 videos from a tutorial website about an image editing program.
Output:
| [
"What is the source of the triples?"
] | task461-156d561151c54808a03c25b55a2e696b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, the system generates a Greeting-type response to a preceding Greeting-type utterance. DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, predicting the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, treating each utterance independently without any context history BIBREF5, BIBREF15.
Output:
| [
"What is dialogue act recognition?"
] | task461-9d702d3c2bcb4a21b1be142f09a65575 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for the sake of balancing the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains. For what concerns the Italian scenario, we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspaper websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t. mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\sim $160k mainstream tweets, corresponding to 227 news articles, and $\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains.
Output:
| [
"Which two news domains are country-independent?"
] | task461-cb6dc522e0db4b8398837c10b25eda16 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We have shown that neural network-based models can outperform feature-based models by wide margins, and we conducted an ablation study to show that contextualized representation learning can boost the performance of NN models.
Output:
| [
"What conclusions do the authors draw from their detailed analyses?"
] | task461-fef2dc3524314b1fad8c4beea63fc265 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer.
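As a rough sketch of the input construction described in this passage (not code from the cited paper), the head-entity and relation embeddings can be reshaped into 2-D maps and stacked before a 2-D convolution; the embedding size, reshape dimensions, and number of filters below are illustrative assumptions.

```python
import torch
import torch.nn as nn

d, h, w = 200, 10, 20            # embedding size and reshape dims (assumed values)
v_h = torch.randn(1, d)          # head entity embedding
v_r = torch.randn(1, d)          # relation embedding

# Reshape each embedding into a 2-D "image" and stack them along the height;
# the tail embedding is not part of this input matrix.
x = torch.cat([v_h.view(1, 1, h, w), v_r.view(1, 1, h, w)], dim=2)   # (1, 1, 2h, w)
feature_maps = nn.Conv2d(1, 32, kernel_size=3)(x)                    # fed to the convolution layer
```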
Output:
| [
"Did the authors try stacking multiple convolutional layers?"
] | task461-d3d9c97511f94d58b069cf72c59837ac |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked to each other. We use the annotation tool MMAX2 BIBREF48, which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also covers wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference.
Output:
| [
"How are the (possibly incorrect) coreference chains in the MT outputs annotated?"
] | task461-b49866353b284b8280e1b345734813f8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Other Timex3 mentions are more ambiguous, so we use a distant supervision approach. Phrases like “currently" and “today's" tend to occur near Events that overlap the current document creation time, while “ago" or “$n$-years" indicate past events. These dominant temporal associations can be learned from training data and then used to label Timex3s. Finally, we define a logistic regression rule to predict entity DocRelTime values as well as specify a linear skip-chain factor over Event mentions and their nearest Timex3 neighbor, encoding the baseline system heuristic directly as an inference rule.
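A rough illustration of the distant-supervision step described here: counting which DocRelTime label each Timex3 phrase most often co-occurs with in training and reusing the dominant label at prediction time. The data structures and example phrases are assumptions; the logistic regression rule and skip-chain factor are not shown.

```python
from collections import Counter, defaultdict

def learn_timex_associations(training_pairs):
    """training_pairs: iterable of (timex_phrase, doc_rel_time_label) pairs
    taken from Timex3 mentions and the labels of nearby Events."""
    counts = defaultdict(Counter)
    for phrase, label in training_pairs:
        counts[phrase.lower()][label] += 1
    # Keep only the dominant association for each phrase.
    return {phrase: c.most_common(1)[0][0] for phrase, c in counts.items()}

rules = learn_timex_associations([
    ("currently", "Overlap"), ("currently", "Overlap"),
    ("today's", "Overlap"), ("ago", "Before"), ("3-years", "Before"),
])
print(rules.get("ago"))  # -> "Before"
```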
Output:
| [
"How do they obtain distant supervision rules for predicting relations?"
] | task461-e00980198fc94988980ed32fb930fca4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The embedding-wise convolution applies a convolution filter $w \in \mathbb{R}^{k \times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ is generated.
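A minimal NumPy sketch of this embedding-wise convolution; the sentence length, random filter values, and the tanh nonlinearity are illustrative assumptions rather than details from the passage.

```python
import numpy as np

def embedding_wise_convolution(sentence, w, b=0.0):
    """Slide a filter w (shape k x d) over windows of k word embeddings.

    sentence: array of shape (n, d) -- n word embeddings of dimension d.
    Returns the feature map c of length n - k + 1.
    """
    n, d = sentence.shape
    k = w.shape[0]
    c = np.empty(n - k + 1)
    for i in range(n - k + 1):
        window = sentence[i:i + k]               # local context of k words
        c[i] = np.tanh(np.sum(window * w) + b)   # one feature per window (tanh is an assumed nonlinearity)
    return c

# Toy example: 7 words, 4-dimensional embeddings, filter width k = 3.
rng = np.random.default_rng(0)
sent = rng.normal(size=(7, 4))
filt = rng.normal(size=(3, 4))
print(embedding_wise_convolution(sent, filt))    # feature map of length 5
```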
Output:
| [
"What supports the claim that injected CNN into recurent units will enhance ability of the model to catch local context and reduce ambiguities?"
] | task461-2444e4150cc049de85849f90d075c350 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 . For the within-speaker task, all the phone triplets belong to the same speaker (e.g., $A$, $B$ and $X$ are all uttered by the same speaker). Finally, the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $A$ and $B$ belong to the same speaker, and $X$ to a different one (e.g., $A$ and $B$ come from speaker 1 and $X$ from speaker 2). The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts and all pairs of central phones and converted to an error rate.
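A simplified sketch of how an ABX error rate can be scored once a distance between representations is available; the mean pooling and cosine distance below are assumptions, and the averaging over contexts, speakers and central-phone pairs described above is omitted.

```python
import numpy as np

def abx_error_rate(triplets, distance):
    """triplets: list of (A, B, X) where A and X share the phone category
    and B is the contrasting category. distance(u, v) -> float.
    Returns the fraction of triplets where X is closer to B than to A."""
    errors = 0.0
    for A, B, X in triplets:
        d_ax, d_bx = distance(A, X), distance(B, X)
        if d_ax > d_bx:
            errors += 1.0
        elif d_ax == d_bx:           # ties count as half an error
            errors += 0.5
    return errors / len(triplets)

# Toy distance: mean-pool frame representations, then cosine distance (an assumption).
def cosine_distance(u, v):
    u, v = u.mean(axis=0), v.mean(axis=0)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Usage: err = abx_error_rate(triplets, cosine_distance)
```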
Output:
| [
"What is the metric that is measures in this paper?"
] | task461-e9d437b6b0804c7ebfc3c16ad0abf906 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We then evaluate the ranks of the target entities $t$ with the commonly used mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10.
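For reference, a small sketch of how these ranking metrics are conventionally computed from the 1-based ranks assigned to the gold target entities; this reflects standard practice, not code from the paper.

```python
def ranking_metrics(ranks, ks=(1, 3, 10)):
    """ranks: iterable of 1-based ranks of the correct target entity t."""
    ranks = list(ranks)
    n = len(ranks)
    metrics = {
        "MR": sum(ranks) / n,                    # mean rank
        "MRR": sum(1.0 / r for r in ranks) / n,  # mean reciprocal rank
    }
    for k in ks:
        metrics[f"Hits@{k}"] = sum(r <= k for r in ranks) / n
    return metrics

# Example: three test triples whose correct tails were ranked 1, 4 and 12.
print(ranking_metrics([1, 4, 12]))
# MR ~ 5.67, MRR ~ 0.44, Hits@1 ~ 0.33, Hits@3 ~ 0.33, Hits@10 ~ 0.67
```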
Output:
| [
"what was the evaluation metrics studied in this work?"
] | task461-a651ad3108b245ddaf46910bd822d02d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The corpus of supervisor assessment has 26972 sentences.
Output:
| [
"What is the size of the real-life dataset?"
] | task461-849e27dc2969403899d98113ad439027 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our video question answering task is novel and, to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks.
Baselines ::: Baseline1: Sentence-level prediction
Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.
Model. The model takes two inputs, a transcript, $\lbrace s_1, s_2, ... s_n\rbrace $ where $s_i$ are individual sentences, and a question, $q$. The output is the span scores, $y$, i.e., the scores over all possible spans. GloVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.
where $n$ is the number of sentences. The output of Passage-level Encoding, $p$, is a sequence of vectors, $p_i$, each of which represents the latent meaning of a sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.
where [$\cdot $,$\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.
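A minimal PyTorch-style sketch of this span scoring step, concatenating the start and end sentence encodings and passing them through a one-layer feed-forward scorer. The hidden size is an assumption, and the question encoding and its interaction with the span are omitted for brevity.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Scores every candidate span (i, j) of sentence encodings p_1..p_n."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.ff = nn.Linear(2 * hidden_dim, 1)     # one-layer feed-forward scorer

    def forward(self, p):
        # p: (n, hidden_dim) sentence encodings from the passage-level encoder
        n = p.size(0)
        starts = p.unsqueeze(1).expand(n, n, -1)   # p_i for every pair (i, j)
        ends = p.unsqueeze(0).expand(n, n, -1)     # p_j for every pair (i, j)
        spans = torch.cat([starts, ends], dim=-1)  # [p_i, p_j] concatenation
        return self.ff(spans).squeeze(-1)          # (n, n) span scores y

scores = SpanScorer(hidden_dim=8)(torch.randn(5, 8))   # scores for all 25 candidate spans
```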
In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.
Baselines ::: Baseline2: Segment retrieval
We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phase (see Section SECREF3). Note that each segment corresponds to a candidate answer. Then, the task is to pick the best segment for a given query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.
Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.
$h^s$ is then re-weighted using attention weights.
where $\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.
During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.
Baselines ::: Baseline3: Pipeline Segment retrieval
We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task, however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and the question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking the retrieval as correct if the answer is found in these videos. While this task is less computationally expensive than the previous baseline, we do not learn the segment representations, as it is a simple retrieval task based on TF-IDF embeddings.
Model. The first two inputs are the question, q, and the video transcript, v, encoded by their TF-IDF vectors BIBREF18:
We then filter the top 10 video transcripts (out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, $S_{top10_n}$, where $n = 10$. We repeat the process for the corresponding segments:
selecting the segment with the minimal cosine distance to the query.
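A condensed sketch of this two-stage pipeline using scikit-learn's TfidfVectorizer and cosine similarity; the function signature and the way the corpus is passed in are assumptions, and fitting a single vectorizer over the transcripts plus the question is a simplification of the description above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_segment(question, transcripts, segments_per_video, top_k=10):
    """transcripts: one full transcript string per video.
    segments_per_video: list of lists of segment strings, aligned with transcripts."""
    n_videos = len(transcripts)
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(transcripts + [question])
    videos, q_vec = matrix[:n_videos], matrix[n_videos:]

    # Stage 1: keep the top-k videos most similar (lowest cosine distance) to the question.
    video_sims = cosine_similarity(q_vec, videos).ravel()
    top_videos = video_sims.argsort()[::-1][:top_k]

    # Stage 2: rank the segments of those videos against the question.
    candidates = [(v, s) for v in top_videos for s in segments_per_video[v]]
    seg_vec = TfidfVectorizer()
    seg_matrix = seg_vec.fit_transform([s for _, s in candidates] + [question])
    n_cand = len(candidates)
    seg_sims = cosine_similarity(seg_matrix[n_cand:], seg_matrix[:n_cand]).ravel()
    best_video, best_segment = candidates[int(seg_sims.argmax())]
    return best_video, best_segment
```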
Output:
| [
"What baseline algorithms were presented?"
] | task461-06ee6cedc5234b9ab0583c0b9e66a51e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question.
Output:
| [
"What approach does this work propose for the new task?"
] | task461-6eebd8b1f0f643329300663787a2c0f0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our main contributions are (1) the publication of a Turkish corpus for coarse-grained and fine-grained NER, and TC research, (2) six different versions of the corpus according to noise reduction methodology and entity types, (3) an analysis of the corpus and (4) benchmark comparisons for NER and TC tasks against human annotators.
Output:
| [
"Did they experiment with the dataset on some tasks?"
] | task461-27cbb99423354d49943d482cd1d00484 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks.
Output:
| [
"What are the specific tasks being unified?"
] | task461-0e9b5a3b420b467b8ea528c9c28e45b1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks.
Output:
| [
"Which approach out of two proposed in the paper performed better in experiments?"
] | task461-a72c1b158003495a85796c10a0d13fe5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish.
Output:
| [
"What is the challenge for other language except English"
] | task461-fa63cd71f4294f1b85063046ff746b6a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences.
Output:
| [
"What is the difference of the proposed model with a standard RNN encoder-decoder?"
] | task461-091d93aa1228435393db73a4a6f1a15b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Output:
| [
"What improvement does the MOE model make over the SOTA on machine translation?"
] | task461-1b2e4fdeeb8b429b9b2d17881ad9eb58 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training.
The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\lambda \in \lbrace 0.3, 0.5, 0.7\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set.
We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss.
Output:
| [
"What is baseline used?"
] | task461-6d6b19589468437ba5d88cd8ba9c7ce4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently.
Output:
| [
"What are the characteristics of the accounts that spread fake news?"
] | task461-9050803ca9794d5f8975ed864b8ad1f6 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
Output:
| [
"By how much do they outperform existing state-of-the-art VQA models?"
] | task461-98c403a1c3f14489bdfa94d29b7da31b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations.
Output:
| [
"what is the size of this built corpus?"
] | task461-a244a535fc9c4e60ae7b886af490fdad |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The selection model selects the best answer from the set $\lbrace a_i\rbrace _{i=1}^N$ observed during the interaction by predicting the difference of the F1 score to the average F1 of all variants. We train another neural network to pick the best answer from the candidates. We frame the task as binary classification, distinguishing between above and below average performance. In training, we compute the F1 score of the answer for every instance. If the rewrite produces an answer with an F1 score greater than the average score of the other rewrites the instance is assigned a positive label.
Output:
| [
"how are multiple answers from multiple reformulated questions aggregated?"
] | task461-1a095a74e8574759af437ae0079c4728 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.
It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint). Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English.
Output:
| [
"What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?"
] | task461-7ab9344796e142548de0063031f6a143 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We used the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some. The dataset consists of 1,362 unique utterances with a noun phrase containing some (some-NP). For each example with a some-NP, degen2015investigating collected inference strength ratings from at least 10 participants recruited on Amazon's Mechanical Turk. Participants saw both the target utterance and ten utterances from the preceding discourse context. They then rated the similarity between the original utterance like (UNKREF8) and an utterance in which some was replaced with some, but not all like (UNKREF9), on a 7-point Likert scale with endpoints labeled “very different meaning” (1) and “same meaning” (7). Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength.
Output:
| [
"Which dataset do they use?"
] | task461-eada2444a8f149188990b6739fe5c02f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: A group of 50 native speakers who were well-versed in both English and Tamil acted as annotators for the evaluation. A collection of samples of about 100 sentences was taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph model are tabulated. The adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence-level translations were individually ranked against each other, permitting the two translations to have ties in the ranking.
Output:
| [
"How were the human judgements assembled?"
] | task461-5342242affbd42c99dea58c7fd571211 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Pretraining. We collect three years of online news articles from June 2016 to June 2019. We filter out articles overlapping with the evaluation data on media domain and time range. We then conduct several data cleaning strategies.
Output:
| [
"What did they pretrain the model on?"
] | task461-5e2b5408415f491aadbc5c0251064f54 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset.
Output:
| [
"What features did they train on?"
] | task461-f06d77fc013145deb71e2e3a658cc39c |
Now complete the following example -
Input: Recently, the MADAMIRA BIBREF6 system has been evaluated using a blind test set (25K words of Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2%, measured as the percentage of words for which the chosen analysis (provided by the SAMA morphological analyzer BIBREF7) has the correct lemma. Since MSA is usually written without diacritics, and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added; it is used for evaluating our lemmatizer and comparing it with the state-of-the-art system for lemmatization, MADAMIRA.
Output:
| [
"What is the state of the art?"
] | task461-dd51fd4046324566b9ffdb02adb4fe01 |
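A minimal sketch (not part of the dataset row above) of the evaluation that excerpt describes: lemmatization accuracy as the percentage of words whose predicted lemma matches the gold lemma, plus a second score computed after stripping diacritics. The function names are illustrative, and removing diacritics via Unicode combining marks is an assumption about how the undiacritized comparison is done.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining marks (e.g. Arabic short vowels) from a string."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def lemma_accuracy(gold, predicted, undiacritized=False):
    """Percentage of tokens whose predicted lemma equals the gold lemma."""
    if undiacritized:
        gold = [strip_diacritics(g) for g in gold]
        predicted = [strip_diacritics(p) for p in predicted]
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    return 100.0 * correct / len(gold)

gold_lemmas = ["كِتاب", "دَرَس"]
pred_lemmas = ["كتاب", "دَرَس"]
print(lemma_accuracy(gold_lemmas, pred_lemmas))                      # 50.0 (diacritics must match)
print(lemma_accuracy(gold_lemmas, pred_lemmas, undiacritized=True))  # 100.0
```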
Now complete the following example -
Input: For annotation purposes, we created snippets of conversations such as the ones shown in Example 1 and Example 2, consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event. However, the trade-off is that snippets like this allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a “turker” to work on an individual snippet in exchange for a small payment, and it expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described.
Output:
| [
"how was annotation done?"
] | task461-83f05edbc72b424883f4c19c2957194c |
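A minimal sketch (not part of the dataset row above) of how the three annotations collected per snippet could be reduced to one label per aspect by majority vote. The aspect names, label values, and the handling of three-way ties are assumptions, not details taken from the excerpt.

```python
from collections import Counter

ASPECTS = ["intention", "intention_disclosure", "interpretation", "response_strategy"]  # assumed names

def majority_labels(annotations):
    """annotations: list of dicts, one per annotator, mapping aspect -> label."""
    result = {}
    for aspect in ASPECTS:
        votes = Counter(a[aspect] for a in annotations)
        label, count = votes.most_common(1)[0]
        # With three annotators, a 1-1-1 split has no majority; mark it for adjudication.
        result[aspect] = label if count >= 2 else "NO_MAJORITY"
    return result

snippet_annotations = [
    {"intention": "trolling", "intention_disclosure": "hidden",
     "interpretation": "trolling", "response_strategy": "engage"},
    {"intention": "trolling", "intention_disclosure": "exposed",
     "interpretation": "trolling", "response_strategy": "ignore"},
    {"intention": "none", "intention_disclosure": "hidden",
     "interpretation": "trolling", "response_strategy": "engage"},
]
print(majority_labels(snippet_annotations))
# {'intention': 'trolling', 'intention_disclosure': 'hidden',
#  'interpretation': 'trolling', 'response_strategy': 'engage'}
```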
Now complete the following example -
Input: Motivated by phrase-based SMT, we retrieve neighbors which have high local, sub-sentence-level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We represent every sentence by its reduced n-gram set. For every n-gram in INLINEFORM0, we find the closest n-gram in the training set using the IDF similarity defined above.
Output:
| [
"Where do they retrieve neighbor n-grams from in their approach?"
] | task461-38c848d35c3a4310be499be11d91fc19 |
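A minimal sketch (not part of the dataset row above) of the retrieval step that excerpt describes: represent sentences by their n-grams and, for each n-gram of the source sentence, find the closest training n-gram under an IDF-based similarity. The excerpt's exact "IDF similarity defined above" is not shown, so the similarity used here (IDF-weighted token overlap) is an assumption, as are all function names.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_idf(corpus):
    """corpus: list of token lists. Returns token -> IDF weight."""
    df = Counter()
    for sent in corpus:
        df.update(set(sent))
    n_docs = len(corpus)
    return {tok: math.log(n_docs / df_t) for tok, df_t in df.items()}

def idf_similarity(a, b, idf):
    """IDF-weighted overlap of two n-grams, normalised by their union."""
    shared = set(a) & set(b)
    union = set(a) | set(b)
    num = sum(idf.get(t, 0.0) for t in shared)
    den = sum(idf.get(t, 0.0) for t in union) or 1.0
    return num / den

def closest_ngrams(source_tokens, train_ngrams, idf, n=3):
    """For each n-gram of the source sentence, return its nearest training n-gram."""
    return {
        g: max(train_ngrams, key=lambda t: idf_similarity(g, t, idf))
        for g in ngrams(source_tokens, n)
    }

corpus = [s.split() for s in [
    "the cat sat on the mat",
    "a dog sat on the rug",
    "the cat chased a mouse",
]]
idf = build_idf(corpus)
train_ngrams = {g for sent in corpus for g in ngrams(sent, 3)}
print(closest_ngrams("the cat sat quietly".split(), train_ngrams, idf))
```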
Now complete the following example -
Input: We compare our method to the following baselines: (1) Single-task CNN: training a CNN model for each task individually; (2) Single-task FastText: training one FastText model BIBREF23 with fixed embeddings for each individual task; (3) Fine-tuned the holistic MTL-CNN: a standard transfer-learning approach, which trains one MTL-CNN model on all the training tasks offline, then fine-tunes the classifier layer (i.e. $\mathrm {M}^{(cls)}$ Figure 1 (a)) on each target task; (4) Matching Network: a metric-learning based few-shot learning model trained on all training tasks; (5) Prototypical Network: a variation of matching network with different prediction function as Eq. 9 ; (6) Convex combining all single-task models: training one CNN classifier on each meta-training task individually and taking the encoder, then for each target task training a linear combination of all the above single-task encoders with Eq. ( 24 ).
Output:
| [
"Do they compare with the MAML algorithm?"
] | task461-0d949da17fac47519f01ad7d875038a1 |
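A minimal sketch (not part of the dataset row above) of the prototypical-network baseline mentioned in that excerpt, in its textbook form: each class prototype is the mean of its support-set embeddings, and queries are classified by (negative) squared Euclidean distance to the prototypes. The paper's specific prediction function (its Eq. 9) is not reproduced here; this is only the standard variant, with illustrative names, using NumPy.

```python
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Return (classes, prototype matrix) from embedded support examples."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        support_embeddings[np.array(support_labels) == c].mean(axis=0)
        for c in classes
    ])
    return classes, protos

def predict(query_embeddings, classes, protos):
    """Softmax over negative squared distances to each class prototype."""
    # (num_queries, num_classes) matrix of squared Euclidean distances
    d2 = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d2
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return [classes[i] for i in probs.argmax(axis=1)], probs

rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (5, 8)), rng.normal(1, 0.1, (5, 8))])
labels = ["neg"] * 5 + ["pos"] * 5
classes, protos = prototypes(support, labels)
queries = np.vstack([rng.normal(0, 0.1, (1, 8)), rng.normal(1, 0.1, (1, 8))])
print(predict(queries, classes, protos)[0])   # expected: ['neg', 'pos']
```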
Now complete the following example -
Input: We segment a hashtag into meaningful English phrases. In order to achieve this, we use a dictionary of English words. The sentiment of the URL: Since almost all the articles are written in well-formatted English, we analyze the sentiment of the first paragraph of the article using the Stanford Sentiment Analysis tool BIBREF4.
Output:
| [
"Do the authors report only on English language data?"
] | task461-15fba80e7a5b4ce59fb25fb8ba8a3a89 |
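A minimal sketch (not part of the dataset row above) of the hashtag-segmentation step that excerpt describes, using dictionary lookup with a small word-break dynamic program. The tiny dictionary and the preference for fewer words are illustrative assumptions; the article-sentiment step (the Stanford tool) is not sketched here. A production system would plug in a full English word list and could score segmentations with word frequencies instead of simply preferring fewer words.

```python
DICTIONARY = {"i", "love", "new", "york", "newyork", "a", "apple"}  # toy dictionary for illustration

def segment_hashtag(tag: str, dictionary=DICTIONARY):
    """Split '#ILoveNewYork' into dictionary words; return None if impossible."""
    text = tag.lstrip("#").lower()
    # best[i] holds one valid segmentation of text[:i], preferring fewer words.
    best = {0: []}
    for i in range(1, len(text) + 1):
        candidates = [
            best[j] + [text[j:i]]
            for j in range(i)
            if j in best and text[j:i] in dictionary
        ]
        if candidates:
            best[i] = min(candidates, key=len)
    return best.get(len(text))

print(segment_hashtag("#ILoveNewYork"))   # ['i', 'love', 'newyork']
```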
Now complete the following example -
Input: Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the English IMDB Large Movie Review dataset and tested it on the Chinese ChnSentiCorp dataset and the German SB-10K BIBREF24, BIBREF25. A natural language inference task consists of two sentences, a premise and a hypothesis, which are either contradictions, entailments, or neutral. Learning an NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features.
Output:
| [
"Did they experiment with tasks other than word problems in math?"
] | task461-57e4146f1b054581b9e4f4beac772251 |
Now complete the following example -
Input: For the MT task, we use the WMT 2014 En $\leftrightarrow$ Fr parallel corpus. The dataset contains 36 million En $\rightarrow$ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow$ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously.
Output:
| [
"What data were they used to train the multilingual encoder?"
] | task461-15e87633b5f44e09ae4a09dfc29a9bae |
Now complete the following example -
Input: The ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences of lower quality.
Output:
| [
"What dataset is used?"
] | task461-894d309f0b144db582a59c1bf11ad910 |
Now complete the following example -
Input: The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
Output:
| [
"Which two datasets is the system tested on?"
] | task461-404bc7c6da4b445b985c96f1834e7f3c |
Now complete the following example -
Input: Specifically, we first extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, and then we select the relation that is most relevant to the answer with carefully designed heuristic rules.
Output:
| [
"How they extract \"structured answer-relevant relation\"?"
] | task461-8f5798b0478e498ba80966898823c4f3 |
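A minimal sketch (not part of the dataset row above) of the second step that excerpt describes: once an OpenIE tool has produced (subject, relation, object) triples, select the one most relevant to the answer. The paper's carefully designed heuristic rules are not given in the excerpt, so relevance is approximated here by simple token overlap with the answer; the names and example triples are illustrative.

```python
def token_overlap(a: str, b: str) -> int:
    """Number of lower-cased tokens shared by two strings."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def most_relevant_triple(triples, answer):
    """triples: list of (subject, relation, object) tuples from an OpenIE tool."""
    return max(triples, key=lambda t: token_overlap(" ".join(t), answer))

triples = [
    ("Oxygen", "is", "a chemical element"),
    ("Oxygen", "has", "the symbol O and atomic number 8"),
]
print(most_relevant_triple(triples, "atomic number 8"))
# ('Oxygen', 'has', 'the symbol O and atomic number 8')
```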
Now complete the following example -
Input: Using the distribution of the individual words in a category, we can compile distributions for the entire category, and therefore generate maps for these word categories.
Output:
| [
"Do they build a model to automatically detect demographic, lingustic or psycological dimensons of people?"
] | task461-c2085ff741114445afe4185daad31e53 |
Now complete the following example -
Input: Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).
Output:
| [
"Which languages are explored in this paper?"
] | task461-c001f76c4beb4089ae4cff3d3d8e91ab |
Now complete the following example -
Input: We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46, where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a psychologist clinician and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets), containing 3981 depressed and 4789 control users (that do not show any depressive behavior), was verified by two human judges BIBREF46.
Output:
| [
"How is the data annotated?"
] | task461-d5410dc68e3e4b24832beb77f39c4f64 |
Now complete the following example -
Input: System description ::: Architecture The components involved in the process are: QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer. QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. Bot: Calls the WebApp with the User's query to get results.
Output:
| [
"What components is the QnAMaker composed of?"
] | task461-6210b50e6b9b47958bbc050303ff2f93 |
Now complete the following example -
Input: We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, models with the skip-gram architecture.
Output:
| [
"what word embedding techniques did they experiment with?"
] | task461-3c4af254f17240bfb84ebb0c2c5fa448 |
Now complete the following example -
Input: For the datasets, we use Karpathy and Fei-Fei's split for MS-COCO dataset BIBREF10 .
Output:
| [
"What dataset/corpus is this work evaluated over?"
] | task461-780c7cc33f05419faf7baf542b90a9d5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL.
Output:
| [
"Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages?"
] | task461-2a5f1b9b39db44f0b7f95c70749c8c13 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Table TABREF14 shows the results of our main experiments on the 2016 and 2018 test sets for French and German. We use Meteor BIBREF31 as the main metric, as in the WMT tasks BIBREF25. We compare our transformer baseline to transformer models enriched with image information, as well as to the deliberation models, with or without image information.
We first note that our multimodal models achieve state-of-the-art performance for transformer networks (constrained models) on the English-German dataset, as compared to BIBREF30. Second, our deliberation models lead to significant improvements over this baseline across test sets (average INLINEFORM0, INLINEFORM1).
Output:
| [
"What dataset does this approach achieve state of the art results on?"
] | task461-0065dbf08ee840498dd27544c1eeba75 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Output:
| [
"What datasets were used for this work?"
] | task461-fb52c17183c1464bb0d79665cf76fe8d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock).
Output:
| [
"Which multiple datasets did they train on during joint training?"
] | task461-2e676d189de64faeb37df97d5e8f70c1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3, 3). In Figures 3 and 3, we color-code the training data with the 10 HMM states. In Figures 3 and 3, we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters.
Output:
| [
"Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information?"
] | task461-e5e7d800db084958b1003fac1fbe1f52 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews.
Output:
| [
"What are the sources of the data?"
] | task461-dbf8813a3c964d86987681fb626e18af |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and have each predicate annotated by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments.
Output:
| [
"How was the previous dataset annotated?"
] | task461-aba770e4e7fd4ad0bc2af8401b6c3c02 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.
Recommendations ::: (R1) Choose professional translators as raters.
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.
Recommendations ::: (R2) Evaluate documents, not sentences.
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i.e., full news articles) as the unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).
Recommendations ::: (R3) Evaluate fluency in addition to adequacy.
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.
Recommendations ::: (R4) Do not heavily edit reference translations for fluency.
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).
Recommendations ::: (R5) Use original source texts.
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.
Output:
| [
"What recommendations do they offer?"
] | task461-7b839514dcbb4ba0a7d6b560e93eec67 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: This action space is of the order of $A=\mathcal{O}(|V| \times |O|^2)$ where $V$ is the number of action verbs, and $O$ is the number of distinct objects in the world that the agent can interact with. As this is too large a space for an RL agent to effectively explore, the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph as in BIBREF7.
Output:
| [
"How is the domain knowledge transfer represented as knowledge graph?"
] | task461-74f8eefad53248428da7c4f677830d6c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: $Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7. Hence, we can expect interpretable embeddings by maximizing it. For evaluating the interpretability, we use $Coherence@k$ (Equation 6), automated and manual word intrusion tests.
Output:
| [
"How do they evaluate interpretability?"
] | task461-f0cbc70b3d6a4aada7165279a716a6be |