| input | output | id |
| --- | --- | --- |
| string · lengths 3.23k–13.3k | sequence of strings · lengths 1–3 | string · length 40 |
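To make the row schema above concrete, here is a minimal sketch of the record type each entry below follows. Python is an assumption (the dump itself does not prescribe a language); the field names and types simply mirror the header table, and each record is rendered below as its `input` text followed by a `| [ ... ] | id |` row holding the `output` and `id` cells.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TaskRow:
    """One row of this dump (schema taken from the header table above)."""
    input: str         # full prompt: definition, positive/negative examples, and the instance to complete
    output: List[str]  # 1-3 reference questions for the instance
    id: str            # 40-character row identifier, e.g. "task461-e79ad2341e1b4f74bebaa53f53b855ca"
```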
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
Output:
| [ "What are state-of-the art models for this task?" ] | task461-e79ad2341e1b4f74bebaa53f53b855ca |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The De7 database used for this study was designed by Marc Schroeder as one of the first attempts of creating diphone databases for expressive speech synthesis BIBREF2. The database contains three voice qualities (modal, soft and loud) uttered by a German female speaker, with about 50 minutes of speech available for each voice quality.
Output:
| [ "What large corpus is used for experiments?" ] | task461-3f6caaac6b2e4db2b3c209dda79a6a7b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We apply our adaptively sparse Transformers on four machine translation tasks. IWSLT 2017 German $\rightarrow $ English BIBREF27: 200K sentence pairs.
KFTT Japanese $\rightarrow $ English BIBREF28: 300K sentence pairs.
WMT 2016 Romanian $\rightarrow $ English BIBREF29: 600K sentence pairs.
WMT 2014 English $\rightarrow $ German BIBREF30: 4.5M sentence pairs.
Output:
| [ "What tasks are used for evaluation?" ] | task461-bb6fa961d7894c41a53d12ffbfca71a6 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct.
Output:
| [ "Why MS-MARCO is different from SQuAD?" ] | task461-f011eaf97d344c31aba7e53c70609ba5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement.
Output:
| [ "How does dataset model character's profiles?" ] | task461-01287f4f3d1c41bcb699d341818497e2 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Output:
| [ "What non-contextual properties do they refer to?" ] | task461-93ac9c31bf4647b097a903a9ebd44987 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We construct the embeddings for each document using BERT models finetuned on MLDoc. We mean-pool each document embedding to create a single vector per document. We then calculate the cosine similarity between the embeddings for the English document and its translation. In Table TABREF21 , we observe that the median cosine similarity increases dramatically with adversarial training, which suggests that the embeddings became more language-independent.
Output:
| [ "How do they quantify alignment between the embeddings of a document and its translation?" ] | task461-b3913aee51834ad7b3b1ec00c125b8b2 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Probing into the assorted empirical results may help us discover some interesting phenomenons:
The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.
Fixed representations yield better results than fine-tuning in some cases BIBREF24.
Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16.
Output:
| [ "What experimental phenomena are presented?" ] | task461-258d5f773bc449798aa985e87c31771d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration.
Output:
| [ "How many label options are there in the multi-label task?" ] | task461-2b4ecfb7358c4e7ab9da61d8639e6c64 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet.
Output:
| [ "Do they use a crowdsourcing platform for annotation?" ] | task461-51b1904c6b924119add9309595e26e91 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22. Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax.
Output:
| [ "What semantic and syntactic tasks are used as probes?" ] | task461-7f6be152ac9b4f128580dd3f0527653c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.
Output:
| [ "What latent variables are modeled in the PCFG?" ] | task461-50a6d8f78bfe4ffab0f6825812def638 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We draw on a recently released corpus of state speeches delivered during the annual UN General Debate that provides the first dataset of textual output from states that is recorded at regular time-series intervals and includes a sample of all countries that deliver speeches BIBREF11 .
Output:
| [ "Which dataset do they use?" ] | task461-c7ff20b1e09e4fb98d4b3fec48ec8695 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences.
Output:
| [ "What baselines did they consider?" ] | task461-2afae3979d6f45a7a855959ee4713f7c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56.
Output:
| [ "Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery?" ] | task461-5d9e4ed232d2454a8ef59c71cc0bfb65 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To obtain an estimate of human performance on each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model's reasoning ability matches the average performance of our crowd workers, its results should exceed this “human bound”.
Output:
| [
"What measures were used for human evaluation?"
] | task461-3aeb2b113c954240ace507edaf952068 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We optimized our single-task baseline to get a strong baseline, in order to rule out multi-task learning outperforming single-task learning only because of the two following points: network parameters that suit the multi-task learning approach better, and luckier randomness during multi-task training. To exclude the first point, we tested different hyperparameters for the single-task baseline. We tested all combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes of the LSTM hidden states, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid bad subword generation for the synthetic datasets, we considered the validation and test datasets, in addition to the training dataset, when generating the list of BPE merge operations. We trained the configurations for 14 epochs and trained every configuration three times. We chose the run with the best validation F1-score to rule out the disadvantage of bad randomness. We got the best F1-score with 256 as the size of the LSTM hidden states, 1024 as the word embedding size, and a dropout of 30 %. For the batch size, we used 64.
Output:
| [
"What are the strong baselines you have?"
] | task461-e9cb8d363bc74bb8920fb9ebfe4c73cc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC).
Output:
| [
"Which types of named entities do they recognize?",
"How large is their MNER SnapCaptions dataset?"
] | task461-173a62a957f649ff8e4c6bb8ae564705 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigrams. CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model.
Output:
| [
"What previous methods is their model compared to?"
] | task461-37ba1782ade34905b1d2c1a699fcf5b8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). We randomly sampled 5,000 articles from each quality class and removed all redirect pages, resulting in a dataset of 29,794 articles.
Output:
| [
"How large is their data set?"
] | task461-79462b354fa1425c993d6c9a53060c0a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: But we also showed that in some cases the saliency maps seem to not capture the important input features. The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates.
Output:
| [
"Is the explanation from saliency map correct?"
] | task461-2c00645345e64b0fbffafd63d83833d9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating the WSD task as a sentence-pair classification problem.
Output:
| [
"Do they incoprorate WordNet into the model?"
] | task461-6f45b72ed6764c10acc33ff4d26660e9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The corpus includes 2000 sentences.
Output:
| [
"How long is the dataset?"
] | task461-b606fa5378a74dff92ee53c1f2818e89 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11.
Output:
| [
"what models did they compare to?"
] | task461-584dad0c05fa468d950ea51152d0dbe4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions; instead, it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions.
Output:
| [
"What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?"
] | task461-058d2f4670b64fff895635bfde7bc0fa |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4.
Output:
| [
"What are method's improvements of F1 w.r.t. baseline BERT tagger for Chinese POS datasets?"
] | task461-ee12e4f225f1443bb23d0cfbc404de7c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts.
Output:
| [
"How does Frege's holistic and functional approach to meaning relates to general distributional hypothesis?"
] | task461-8114e0ba70114fac91b0329bc0a9a1dc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: How did the music taste of the audience of popular music change in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products that were more and more contemporary, intense, and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increase.
Output:
| [
"What trends are found in musical preferences?"
] | task461-9ed71abe8de04cad8929ea93ccafb8b4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. Even though a word may be polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation. To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. Surrounding Uniformity (SU) can be expressed as follows: $SU(\vec{w}) = \frac{|\vec{s}(\vec{w})|}{|\vec{w}| + \sum _{i}^{N}|\vec{a_i}(\vec{w})|}$
where $\vec{s}(\vec{w}) = \vec{w} + \sum _{i}^{N} \vec{a_i}(\vec{w}).$
Output:
| [
"How is the fluctuation in the sense of the word and its neighbors measured?"
] | task461-674f2fe3a9484b5a9314c95e90375cb4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The tweetLID workshop shared task requires systems to identify the language of tweets written in Spanish (es), Portuguese (pt), Catalan (ca), English (en), Galician (gl) and Basque (eu).
Output:
| [
"What shared task does this system achieve SOTA in?"
] | task461-7479887c02334edc9df86d781eb35106 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which case the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
Output:
| [
"What are the core best practices of structured content analysis?"
] | task461-e62237ede1ae4376a55e8db02a226b7f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We have implemented the following interfaces for Macaw:
File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.
Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.
Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can also be used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see Naseri and Zamani's study on news popularity in Telegram BIBREF12.
Output:
| [
"What interface does Macaw currently have?"
] | task461-509f074899b64abdab37b04e459b2412 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While it is important to evaluate the utility of the interpretations for some use-cases, this is unrelated to faithfulness.
Output:
| [
"Which are key points in guidelines for faithfulness evaluation?"
] | task461-7a348ec788c34ae79c4d219bddbae5f8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits.
Output:
| [
"What is MRR?"
] | task461-901d935633c54e82aceacf5b7b849451 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We examine a deep-learning approach to sequence labeling using a vanilla recurrent neural network (RNN) with word embeddings, as well as a joint inference, structured prediction approach using Stanford's knowledge base construction framework DeepDive BIBREF1 .
Output:
| [
"Which structured prediction approach do they adopt for temporal entity extraction?"
] | task461-6d63d1056ca74e38858f6e54a0c94502 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8.
Output:
| [
"How much training data from the non-English language is used by the system?"
] | task461-daa1187719c54e57a062d9de06c2717e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 .
Output:
| [
"How are seed dictionaries obtained by fully unsupervised methods?"
] | task461-95ed5b6edd7343f59592f64ebec9d86e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e., re-generating entailed versions of examples in our training sets. We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and perform fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset.
Output:
| [
"How do they combine MonaLog with BERT?"
] | task461-b0410519386949fc83bb9915190ecfca |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this work, we challenge current representation techniques and suggest representing the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.
Output:
| [
"How is state to learn and complete tasks represented via natural language?"
] | task461-b39f05e2b39c41e99140b48276692d5d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Dataset: Based on some earlier work, the only available labelled dataset had 3189 rows of text messages with an average length of 116 words and a range of 1 to 1295.
Output:
| [
"How big is the dataset?"
] | task461-b71cc731e3af44998a3a693019ad6883 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In total, around 500 different workers were involved in the annotation.
Output:
| [
"How many people participated in their evaluation study of table-to-text models?"
] | task461-a0979dcf631c49b5951b37551a0fbe39 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value.
Output:
| [
"Are the Transformers masked?"
] | task461-818cf898d2fb476baf25ca5b774dac75 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our annotation scheme introduces opportunities for the educational community to conduct further research on the relationship between features of student talk, student learning, and discussion quality. Once automated classifiers are developed, such relations between talk and learning can be examined at scale. Also, automatic labeling via a standard coding scheme can support the generalization of findings across studies, and potentially lead to automated tools for teachers and students. The development of an annotation scheme explicitly designed for this problem is the first step towards collecting and annotating corpora that can be used by the NLP community to advance the field in this particular area.
Output:
| [
"what opportunities are highlighted?"
] | task461-251ea67c23824aad984f9e64198450dd |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In order to understand what may be happening in the model, we used the body-only and punchline-only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence.
Output:
| [
"Which part of the joke is more important in humor?"
] | task461-179cc5abe834470c9baad25c9e0fcca9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. We also consider the task of paraphrase detection.
Output:
| [
"What end tasks do they evaluate on?"
] | task461-96f234e2d35142aab578b07bb3899dbe |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges).
Output:
| [
"Did they use a relation extraction method to construct the edges in the graph?"
] | task461-eeeeb938de424ac7a28af926ca1926aa |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \rightarrow mt$ and another for $src \rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \rightarrow pe$ above the previous cross-attention for $mt \rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \rightarrow mt$ and $src \rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders.
Output:
| [
"What was previous state of the art model for automatic post editing?"
] | task461-6172fc9a4af443bbb590b328d11f854c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Open-domain chatbots are more generic dialogue systems. An example is the Poly-encoder from BIBREF7 humeau2019real. It outperforms the Bi-encoder BIBREF8, BIBREF9 and matches the performance of the Cross-encoder BIBREF10, BIBREF11 while maintaining reasonable computation time. It performs strongly on downstream language understanding tasks involving pairwise comparisons, and demonstrates state-of-the-art results on the ConvAI2 challenge BIBREF12. Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model. When the conversation goes well, the dialogue becomes part of the training data, and when the conversation does not, the agent asks for feedback. Lastly, Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously. We use all three of these models as baselines for comparison. We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20.
Output:
| [
"What baseline models are used?"
] | task461-43f677d413ec4ad7804381e2f5140530 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. For machine translation, Google translation API is used.
Output:
| [
"how did the authors translate the reviews to other languages?"
] | task461-5962354cdc1b4f4cbefeabd5601b81fb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document.
Output:
| [
"What kind of model do they use?"
] | task461-5168f4de7d534626b1af0ccb92495162 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We conduct all our experiments on 635 hours of audio data for 7 Indian languages collected from the $\textbf {All India Radio}$ news channel. We collected and curated around 635 hours of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel, where an actor reads news for about 5-10 minutes. To cover many speakers in the dataset, we crawled data from 2010 to 2019. Since the audio clips are too long to train any deep neural network directly, we segment them into smaller chunks using a voice activity detector. Since the audio clips have music embedded during the news, we use an in-house music detection model to remove the music segments and keep the dataset clean. Our dataset contains 635 hours of clean audio, which is divided into 520 hours of training data containing 165K utterances and 115 hours of testing data containing 35K utterances.
Output:
| [
"How was the audio data gathered?"
] | task461-c621780ba0eb4699b08593ca03ed40e4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We carried out two human evaluations using Mechanical Turk to compare the performance of our model and the baseline.
Output:
| [
"Is there any human evaluation involved in evaluating this famework?"
] | task461-e1ebdeb206504a55b0bf5ef3b4b723c4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14 that had been annotated for image segmentation tasks.
Output:
| [
"How big are datasets used in experiments?"
] | task461-260ad09daca2465f84bb99347b7e018c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this case, the decoding function is a linear projection, which is $\mathbf{x} = f_{\text{de}}(\mathbf{z}) = \mathbf{W}\mathbf{z} + \mathbf{b}$, where $\mathbf{W} \in \mathbb{R}^{d_{x} \times d_{z}}$ is a trainable weight matrix and $\mathbf{b} \in \mathbb{R}^{d_{x} \times 1}$ is the bias term. A family of bijective transformations was designed in NICE BIBREF17, and the simplest continuous bijective function $f:\mathbb{R}^D\rightarrow \mathbb{R}^D$ and its inverse $f^{-1}$ are defined as:
$$h: \quad \mathbf{y}_1 = \mathbf{x}_1, \qquad \mathbf{y}_2 = \mathbf{x}_2 + m(\mathbf{x}_1) \nonumber \\ h^{-1}: \quad \mathbf{x}_1 = \mathbf{y}_1, \qquad \mathbf{x}_2 = \mathbf{y}_2 - m(\mathbf{y}_1) \nonumber $$ (Eq. 15)
where $\mathbf{x}_1$ is a $d$-dimensional partition of the input $\mathbf{x} \in \mathbb{R}^D$, and $m:\mathbb{R}^d\rightarrow \mathbb{R}^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named an `additive coupling layer' BIBREF17, which has unit Jacobian determinant. To allow the learning system to explore more powerful transformations, we follow the design of the `affine coupling layer' BIBREF24:
$$h: \quad \mathbf{y}_1 = \mathbf{x}_1, \qquad \mathbf{y}_2 = \mathbf{x}_2 \odot \exp (s(\mathbf{x}_1)) + t(\mathbf{x}_1) \nonumber \\ h^{-1}: \quad \mathbf{x}_1 = \mathbf{y}_1, \qquad \mathbf{x}_2 = (\mathbf{y}_2 - t(\mathbf{y}_1)) \odot \exp (-s(\mathbf{y}_1)) \nonumber $$ (Eq. 16)
where $s:\mathbb{R}^d\rightarrow \mathbb{R}^{D-d}$ and $t:\mathbb{R}^d\rightarrow \mathbb{R}^{D-d}$ are both neural networks with linear output units.
The requirement of the continuous bijective transformation is that the dimensionality of the input $\mathbf{z}$ and the output $\mathbf{x}$ need to match exactly.
Output:
| [
"What are the two decoding functions?"
] | task461-450337e2534c4c5bb8712344e6d83f82 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Word2vec representation is a far better, more advanced, and more recent technique, which functions by mapping words to 300-dimensional vector representations.
Output:
| [
"What is the dimension of the embeddings?"
] | task461-f0f946f6c7a244d9a1289fab0354be93 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor.
Output:
| [
"Do they single out a validation set from the fixed SRE training set?"
] | task461-82b751cdda0a476c82cd6bbfa503d36f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators.
Output:
| [
"What is the new labeling strategy?"
] | task461-d83d3f17432e4207805698e9421e5f8a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: An overview of the provided dataset is given in Table TABREF16. Notice that the available amount of data differs per language.
Output:
| [
"is the dataset balanced across the four languages?"
] | task461-e33d9f80df034d73bab0e59dab6a346a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For model distillation BIBREF6, we extract sentences from Wikipedia in languages for which public multilingual BERT is pretrained. For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4, BIBREF1 and compute the cross-entropy loss for each wordpiece: INLINEFORM0
where INLINEFORM0 is the cross-entropy function, INLINEFORM1 is the softmax function, INLINEFORM2 is the BERT model's logit of the current wordpiece, INLINEFORM3 is the small BERT model's logits and INLINEFORM4 is a temperature hyperparameter, explained in Section SECREF11 .
To train the distilled multilingual model mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we finetune the student model on the labeled data the teacher is trained on.
Output:
| [
"How do they compress the model?"
] | task461-e3c86bef9fad486f9122f6661fe1c2ed |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables.
Output:
| [
"Do they add one latent variable for each language pair in their Bayesian model?"
] | task461-edab51fc2bbf4814bae3768d2d8128a4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: MTMSN BIBREF4 is the first, and so far the only, model that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Output:
| [
"What approach did previous models use for multi-span questions?"
] | task461-1d014828326d4bd386f99eb25888de62 |
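For readers unfamiliar with span-level non-maximum suppression as mentioned in the example above, here is a minimal sketch: candidates are visited in order of score, a span is kept only if it does not overlap an already selected span, and selection stops once the predicted span count is reached. The candidate scorer and the categorical span-count predictor are assumed to exist elsewhere; all names and the example values are illustrative.

```python
from typing import List, Tuple

Span = Tuple[int, int, float]  # (start, end, score), end inclusive

def overlaps(a: Span, b: Span) -> bool:
    return not (a[1] < b[0] or b[1] < a[0])

def nms_spans(candidates: List[Span], num_spans: int) -> List[Span]:
    selected: List[Span] = []
    for cand in sorted(candidates, key=lambda s: s[2], reverse=True):
        if all(not overlaps(cand, kept) for kept in selected):
            selected.append(cand)
        if len(selected) == num_spans:  # num_spans comes from the categorical predictor
            break
    return selected

# Example: three overlapping candidates, with the predictor asking for 2 spans
print(nms_spans([(0, 3, 0.9), (2, 5, 0.8), (7, 9, 0.7)], num_spans=2))
# -> [(0, 3, 0.9), (7, 9, 0.7)]
```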
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.
Output:
| [
"Which model do they use to convert between masculine-inflected and feminine-inflected sentences?"
] | task461-3b5e313993e945a08b2aeae2ec0fb73c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evaluate the TN topic model quantitatively with standard topic model measures such as test-set perplexity, likelihood convergence and clustering measures. Qualitatively, we evaluate the model by visualizing the topic summaries, authors' topic distributions and by performing an automatic labeling task.
Output:
| [
"What are the measures of \"performance\" used in this paper?"
] | task461-ee5f122fe6294746b24a341d2f96bec0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In recent years, researchers have proposed various filter-based feature selection methods to improve the performance of document text classification BIBREF19. Furthermore, in order to rank the terms based on their discriminative power among the classes, we use a filter-based feature selection method named Normalized Difference Measure (NDM) BIBREF5. Considering the features' contour plot, Rehman et al. BIBREF5 suggested that features which lie in the top-left and bottom-right corners of the contour are extremely significant compared to those which lie around the diagonals. State-of-the-art filter-based feature selection algorithms such as ACC2 treat all features which lie around the diagonals in the same fashion BIBREF5.
Output:
| [
"Is the filter based feature selection (FSE) a form of regularization?"
] | task461-11ad4da71ae34ed7ae1214169ca69417 |
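The excerpt above contrasts NDM with ACC2 without spelling out either formula. The sketch below uses the definitions commonly attributed to Rehman et al. (ACC2 = |tpr − fpr|, NDM = |tpr − fpr| / min(tpr, fpr)); treat these formulas, the epsilon guard, and the example numbers as assumptions rather than quotations from the excerpt.

```python
def acc2(tpr: float, fpr: float) -> float:
    """Balanced accuracy difference: the absolute gap between true and false positive rates."""
    return abs(tpr - fpr)

def ndm(tpr: float, fpr: float, eps: float = 1e-6) -> float:
    """Normalized Difference Measure: ACC2 divided by min(tpr, fpr), guarded against zero."""
    return abs(tpr - fpr) / max(min(tpr, fpr), eps)

# Two terms with the same ACC2 score but very different NDM scores,
# illustrating why terms near the contour corners are ranked higher by NDM.
print(acc2(0.50, 0.10), ndm(0.50, 0.10))   # ACC2 ≈ 0.4, NDM ≈ 4
print(acc2(0.45, 0.05), ndm(0.45, 0.05))   # ACC2 ≈ 0.4, NDM ≈ 8
```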
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Mathur et al., in their paper on detecting offensive tweets, proposed a Ternary Trans-CNN model in which they train an architecture comprising 3 Convolution 1D layers with filter sizes of 15, 12 and 10 and a kernel size of 3, followed by 2 dense fully connected layers of sizes 64 and 3. The first dense FC layer has ReLU activation while the last dense layer has Softmax activation. They trained this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve an Accuracy of 83.9%, Precision of 80.2%, and Recall of 69.8%.
The approach looked promising given that the dataset was merely 3189 sentences divided into three categories, and thus we replicated the experiment but failed to replicate the results. Our results were poorer than what the original authors achieved. However, most of the model hyper-parameter choices were inspired by this work.
Output:
| [
"What is the previous work's model?"
] | task461-e5d6f22ba6d44570b6be62f699fc6d1f |
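A minimal Keras sketch of the Trans-CNN stack described in the example above. The phrase "filter sizes of 15, 12 and 10 and a kernel size of 3" is read here as 15/12/10 filters with kernel size 3, and the embedding layer, conv activations, and global max pooling are added only so the sketch runs end to end; the vocabulary size, sequence length, and embedding dimension are placeholders, not values from the paper.

```python
from tensorflow.keras import layers, models

def build_trans_cnn(vocab_size: int = 20000, seq_len: int = 100, emb_dim: int = 100):
    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, emb_dim),     # assumed front-end, not specified in the excerpt
        layers.Conv1D(15, 3, activation="relu"),   # three Conv1D layers: 15, 12, 10 filters,
        layers.Conv1D(12, 3, activation="relu"),   # kernel size 3, per the excerpt
        layers.Conv1D(10, 3, activation="relu"),
        layers.GlobalMaxPooling1D(),               # assumed bridge between conv and dense layers
        layers.Dense(64, activation="relu"),       # first dense FC layer with ReLU
        layers.Dense(3, activation="softmax"),     # final 3-way softmax layer
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_trans_cnn()
model.summary()
```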
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Output:
| [
"Where is MVCNN pertained?"
] | task461-b9781d50bee34fedaa6506bdfacbacb5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets.
Output:
| [
"By how much do they outperform BiLSTMs in Sentiment Analysis?"
] | task461-ab31ab271f4f46b0a80849a41751ade0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability.
Output:
| [
"How was the corpus obtained?"
] | task461-30a4c5deffa847c78ee0b2f56cb5baec |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows: $\mathrm{CIC} = \frac{\sum_{v \in V} Count_{match}(v)}{m}$,
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities.
Output:
| [
"How does new evaluation metric considers critical informative entities?"
] | task461-48f418bbbaa14cd39b703cb4dcbd7365 |
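A minimal sketch of the CIC computation described in the example above: the fraction of delexicalized reference values that also appear in the candidate summary. Treating each value as matched at most once and using case-insensitive substring containment are simplifying assumptions; the paper's exact matching rule may differ, and the example strings are invented.

```python
from typing import Iterable

def cic(candidate: str, reference_values: Iterable[str]) -> float:
    """Recall of reference slot values found in the candidate summary."""
    values = list(reference_values)
    if not values:
        return 0.0
    matched = sum(1 for v in values if v.lower() in candidate.lower())
    return matched / len(values)

# Illustrative usage with invented slot values
ref_values = ["7 pm", "Pizza Hut", "two people"]
print(cic("Booked a table for two people at Pizza Hut tonight.", ref_values))  # 2/3
```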
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The BD-4SK-ASR Dataset ::: The Language Model
We created the language model from the transcriptions. The model was created using CMUSphinx, in which the (fixed) discount mass is 0.5 and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams.
Output:
| [
"What are the results of the experiment?"
] | task461-bba90f3c2b8d459a8c863a48fe11e7fc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The corpus is freely available at the following link
Output:
| [
"Is this dataset publicly available?"
] | task461-dc868a2e023c43989a3de75ebb3c8e57 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword.
Output:
| [
"What are the baselines used?"
] | task461-3a1964d31c014bb39af2f6520e7ed3cb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As a starting point, we used the DIP corpus BIBREF37, a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic.
Output:
| [
"Which collections of web documents are included in the corpus?"
] | task461-c97f02fc073848db80bcf1d576ed667c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT).
Output:
| [
"Was CamemBERT compared against multilingual BERT on these tasks?"
] | task461-ed9d3a7d6577455ca278899477c046a1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Stanford Sentiment Treebank (SST) BIBREF14. This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set. Subj BIBREF15. The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each. TREC BIBREF16. A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances. Irony BIBREF17. This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
Output:
| [
"What dataset/corpus is this evaluated over?"
] | task461-c524f158ed15446d82858548de303255 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application.
Output:
| [
"what is the source of the data?"
] | task461-97eb1e9a88ff4db8ae4afb157ba3d913 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Multimodal representation.
We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project).
The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function.
Output:
| [
"What multimodal representations are used in the experiments?"
] | task461-bdb805bf343641ac99f2e1ac1f6676dd |
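The fusion strategies described in the excerpt of this row (L2-normalized concatenation versus projecting both modalities into a shared $d$-dimensional space before concatenation, trained end-to-end with regression layers under an MSE loss) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the module names, feature dimensionalities, and the single-layer regression head are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatFusion(nn.Module):
    """'concat' variant: L2-normalize each modality, then concatenate."""
    def forward(self, text_vec, img_vec):
        return torch.cat([F.normalize(text_vec, p=2, dim=-1),
                          F.normalize(img_vec, p=2, dim=-1)], dim=-1)

class ProjectFusion(nn.Module):
    """'project' variant: map each modality into a shared d-dim space, then concatenate."""
    def __init__(self, text_dim, img_dim, d):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d)
        self.img_proj = nn.Linear(img_dim, d)

    def forward(self, text_vec, img_vec):
        return torch.cat([self.text_proj(text_vec), self.img_proj(img_vec)], dim=-1)

class PairRegressor(nn.Module):
    """Fuse the left and right (text, image) pairs and regress a similarity score."""
    def __init__(self, fusion, pair_dim):
        super().__init__()
        self.fusion = fusion
        self.head = nn.Linear(2 * pair_dim, 1)  # regression layer over both fused vectors

    def forward(self, left, right):
        h_left = self.fusion(*left)    # multimodal representation of the left pair
        h_right = self.fusion(*right)  # multimodal representation of the right pair
        return self.head(torch.cat([h_left, h_right], dim=-1)).squeeze(-1)

# toy usage: assumed 300-d text and 512-d image features, shared space of d=128
model = PairRegressor(ProjectFusion(300, 512, d=128), pair_dim=2 * 128)
left = (torch.randn(4, 300), torch.randn(4, 512))
right = (torch.randn(4, 300), torch.randn(4, 512))
loss = F.mse_loss(model(left, right), torch.rand(4))
loss.backward()  # projections and regression head are trained end-to-end with MSE
```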
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction.
Output:
| [
"What meta-information is being transferred?"
] | task461-a59e2547e8484e399ef02a028eb5b726 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue). We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Output:
| [
"Are agglutinative languages used in the prediction of both prefixing and suffixing languages?"
] | task461-fc0062a088084269bc51722cddf142fd |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances). To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features.
Output:
| [
"What is the baseline?"
] | task461-8b71ac1b481045bcb5739a78944a6e4a |
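The tagging baseline described in the excerpt of this row (an LSTM over the concatenated OP and PC, with GloVe-initialized word embeddings and the proposed per-word features concatenated to each embedding) could look roughly like the sketch below. The separator handling, feature dimensionality, hidden size, and label set are hypothetical choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class WordTagger(nn.Module):
    """LSTM tagger over the concatenated 'OP <sep> PC' sequence (separator token assumed):
    word embeddings (optionally GloVe-initialized) are concatenated with per-word
    hand-crafted feature vectors, then fed to an LSTM and a per-token classifier."""
    def __init__(self, vocab_size, emb_dim=300, feat_dim=10, hidden=128, n_labels=2,
                 pretrained=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:               # e.g. a GloVe weight matrix
            self.emb.weight.data.copy_(pretrained)
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, token_ids, word_feats):
        # token_ids: (batch, seq); word_feats: (batch, seq, feat_dim)
        x = torch.cat([self.emb(token_ids), word_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                        # per-token label logits

# toy usage: batch of 2 sequences of length 7, hypothetical 10-d word features
model = WordTagger(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 7)), torch.randn(2, 7, 10))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2), torch.randint(0, 2, (14,)))
```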
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization (diverse versions of pretrained word embeddings are used) and variable-size filters (features of multigranular phrases are extracted with variable-size convolution filters).
Output:
| [
"How is MVCNN compared to CNN?"
] | task461-54748967cb9445629d9a697ee5ac4079 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Output:
| [
"What challenges this work presents that must be solved to build better language-neutral representations?"
] | task461-908c12e55944490e86346d6bbf49df0f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In an offline step, we organize the content of each instance in a graph where each node represents either a sentence from the passages or the question. Then, we add edges between nodes using the following topology:
we fully connect nodes that represent sentences from the same passage (dotted-black);
we fully connect nodes that represent the first sentence of each passage (dotted-red);
we add an edge between the question and every node for each passage (dotted-blue).
Output:
| [
"How are some nodes initially connected based on text structure?"
] | task461-095cfef92a1f469a863a168a43ff83aa |
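The graph topology described in the excerpt of this row can be reproduced with a short sketch: one node per passage sentence plus a question node, full connectivity within each passage, full connectivity among the first sentences of all passages, and question-to-sentence edges. The use of networkx and the node-naming scheme are assumptions for illustration only.

```python
import itertools
import networkx as nx

def build_graph(question, passages):
    """Build the sentence graph: nodes are the question plus every passage sentence."""
    g = nx.Graph()
    g.add_node("Q", text=question)
    sent_ids = []  # one list of node ids per passage
    for p, sents in enumerate(passages):
        ids = [f"p{p}_s{i}" for i in range(len(sents))]
        sent_ids.append(ids)
        for nid, s in zip(ids, sents):
            g.add_node(nid, text=s)
        # fully connect sentences from the same passage (dotted-black)
        g.add_edges_from(itertools.combinations(ids, 2), kind="same_passage")
    # fully connect the first sentence of each passage (dotted-red)
    firsts = [ids[0] for ids in sent_ids if ids]
    g.add_edges_from(itertools.combinations(firsts, 2), kind="first_sentences")
    # connect the question to every sentence node of every passage (dotted-blue)
    g.add_edges_from((("Q", nid) for ids in sent_ids for nid in ids), kind="question")
    return g

g = build_graph("Who wrote it?", [["S1 of P1.", "S2 of P1."], ["S1 of P2."]])
print(g.number_of_nodes(), g.number_of_edges())  # 4 nodes, 1 + 1 + 3 = 5 edges
```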
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding / tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an term is added with intent-type tags associated with this sentence-final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both and terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind this is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at ) the reverse pass in the Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding token and leveraging the backward LSTM output at the first time step (i.e., prediction at ) would potentially help joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows:
Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)
Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)
Output:
| [
"What is shared in the joint model?"
] | task461-f41c43a1de3447229e48d0bcef9184be |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The utterance is concatenated with a special symbol marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training.
Output:
| [
"Do they use pretrained word vectors for dialogue context embedding?"
] | task461-23f62c5f24034cea870e5c99ef8eb619 |
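Initializing word embeddings from 300-dimensional GloVe vectors and fine-tuning them during training, as in the excerpt of this row, is a one-liner in PyTorch; the vocabulary size and the way the GloVe matrix is loaded are assumed here for illustration.

```python
import torch
import torch.nn as nn

# hypothetical GloVe matrix of shape (vocab_size, 300), e.g. loaded from disk
glove_matrix = torch.randn(10000, 300)

# freeze=False keeps the embedding weights trainable, i.e. they are fine-tuned
embedding = nn.Embedding.from_pretrained(glove_matrix, freeze=False)

token_ids = torch.tensor([[1, 5, 42, 2]])  # utterance ids ending in an end-of-sequence symbol
vectors = embedding(token_ids)             # shape (1, 4, 300), updated by backprop
```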
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form an LSTM model. LSTM is typically used in seq2seq translation.
Output:
| [
"What is the architecture of the system?"
] | task461-287c4dcb40324cb08c0d299b30df6094 |
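The encoder/decoder/output arrangement mentioned in the excerpt of this row corresponds to a standard LSTM sequence-to-sequence model. The sketch below shows one minimal, assumed configuration (single layer, teacher forcing, shared vocabulary), not the described system's actual architecture.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder LSTM -> decoder LSTM -> output (projection) layer."""
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.output = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.emb(src_ids))            # encode the source
        dec_out, _ = self.decoder(self.emb(tgt_ids), state)   # teacher-forced decoding
        return self.output(dec_out)                           # per-step vocabulary logits

model = Seq2Seq(vocab_size=8000)
logits = model(torch.randint(0, 8000, (2, 12)), torch.randint(0, 8000, (2, 9)))
print(logits.shape)  # torch.Size([2, 9, 8000])
```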
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below.
Output:
| [
"What is the baseline?"
] | task461-9531bf4c6ef34fc6825c5748328638f1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1.
Output:
| [
"How do they measure the quality of summaries?"
] | task461-7eaf8036a2fb4553865d680eaed354fe |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold-standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric and conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. INLINEFORM0 ). There are two distinct ways in which results are generally summarized per-language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to what language they are actually written in. It is also common practice to report an F-score INLINEFORM0 , which is the harmonic mean of precision and recall.
Output:
| [
"what evaluation methods are discussed?"
] | task461-b3dab065bd2448c5b413f24290b51807 |
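The metrics defined in the excerpt of this row (document-level accuracy, per-language precision grouped by predicted language, per-language recall grouped by true language, and F1 as their harmonic mean) can be computed with a small illustrative function; the toy labels below are made up.

```python
from collections import Counter

def per_language_metrics(gold, pred):
    """Document-level accuracy plus per-language precision, recall and F1."""
    assert len(gold) == len(pred)
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    tp, gold_n, pred_n = Counter(), Counter(gold), Counter(pred)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
    scores = {}
    for lang in set(gold) | set(pred):
        prec = tp[lang] / pred_n[lang] if pred_n[lang] else 0.0    # grouped by prediction
        rec = tp[lang] / gold_n[lang] if gold_n[lang] else 0.0     # grouped by gold label
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # harmonic mean
        scores[lang] = (prec, rec, f1)
    return accuracy, scores

acc, scores = per_language_metrics(["en", "en", "de", "fr"], ["en", "de", "de", "fr"])
print(acc)           # 0.75
print(scores["en"])  # precision 1.0, recall 0.5, F1 ~ 0.667
```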
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We used the following affective resources relying on different emotion models.
Emolex: it contains 14,182 words associated with eight primary emotions based on the Plutchik model BIBREF10 , BIBREF11 .
EmoSenticNet (EmoSN): it is an enriched version of SenticNet BIBREF12 including 13,189 words labeled with Ekman's six basic emotions BIBREF13 , BIBREF14 .
Dictionary of Affect in Language (DAL): includes 8,742 English words labeled by three scores representing three dimensions: Pleasantness, Activation and Imagery BIBREF15 .
Affective Norms for English Words (ANEW): consists of 1,034 English words BIBREF16 rated with ratings based on the Valence-Arousal-Dominance (VAD) model BIBREF17 .
Linguistic Inquiry and Word Count (LIWC): this psycholinguistic resource BIBREF18 includes 4,500 words distributed into 64 emotional categories including positive (PosEMO) and negative (NegEMO).
Output:
| [
"What affective-based features are used?"
] | task461-1308eded137c4c5f8d61b5dd4fdf901a |
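Lexicons like the ones listed in the context above are typically turned into features by reducing each tweet to counts of words carrying each emotion label. The sketch below illustrates that idea with a tiny made-up lexicon standing in for Emolex; the words, labels, and helper name are purely illustrative, not taken from the actual resource.

```python
from collections import Counter

# Tiny stand-in for an Emolex-style lexicon: word -> set of emotion labels.
# The entries below are illustrative, not taken from the real resource.
LEXICON = {
    "happy": {"joy"},
    "disaster": {"fear", "sadness"},
    "love": {"joy", "trust"},
    "angry": {"anger"},
}

EMOTIONS = ["anger", "fear", "joy", "sadness", "trust"]

def affective_features(tweet: str) -> list[int]:
    """Count how many tokens of the tweet carry each emotion label."""
    counts = Counter()
    for token in tweet.lower().split():
        for emotion in LEXICON.get(token, ()):
            counts[emotion] += 1
    return [counts[e] for e in EMOTIONS]

print(affective_features("So happy despite the disaster"))  # [0, 1, 1, 1, 0]
```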
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work.
Output:
| [
"What language is the Twitter content in?"
] | task461-f10d3466182244e9a5f5dd5436cb62eb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As shown in Table , the proposed PS-rnn-elmo shows a significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute).
Output:
| [
"How much better performance of proposed model compared to answer-selection models?"
] | task461-24b8268678a44cb6bf9ace20b47b0179 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM).
Output:
| [
"What are the baseline models mentioned in the paper?"
] | task461-23f1199d2b61412c971fa2311c52a58e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length, as in BIBREF14. For this purpose we use 4 large machine translation datasets from WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finnish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples).
Output:
| [
"Which datasets are used in experiments?"
] | task461-3e7e3024c61c44b8b64cd7e01953f201 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data.
Output:
| [
"What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter?"
] | task461-fc1940a323604c419acf4644025b3096 |
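The shadow-feature procedure described in the context above (shuffle a copy of every feature column, train a Random Forest on the extended matrix, and keep only the real features that outrank the shadows) can be written down in a few lines. This is a simplified single-pass sketch using scikit-learn, not the exact procedure from the paper; the toy data and the "beat the best shadow" threshold rule are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_selection(X, y, n_estimators=200, random_state=0):
    """Keep features whose importance exceeds that of the best shuffled 'shadow' copy."""
    rng = np.random.default_rng(random_state)
    # Shadow features: column-wise shuffled copies that destroy any real signal.
    X_shadow = np.apply_along_axis(rng.permutation, 0, X)
    X_extended = np.hstack([X, X_shadow])

    forest = RandomForestClassifier(n_estimators=n_estimators,
                                    random_state=random_state)
    forest.fit(X_extended, y)

    importances = forest.feature_importances_
    n = X.shape[1]
    threshold = importances[n:].max()        # best shadow importance
    return np.flatnonzero(importances[:n] > threshold)

# Toy usage with random data; in practice X would hold the engineered features.
X = np.random.default_rng(1).normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # only the first two columns matter
print(shadow_feature_selection(X, y))         # typically array([0, 1])
```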
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data).
Output:
| [
"What in-domain data is used to continue pre-training?"
] | task461-14ff4af27e8f4768ac465f0485b9004e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.
GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, only a few studies have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as has that of graph neural networks.
Output:
| [
"What is the message passing framework?"
] | task461-4c19ab83c84249bf9c318b557d154ea1 |
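The recursive neighborhood aggregation described in the context above amounts to the following: at every iteration, each vertex aggregates its neighbors' current vectors and combines the result with its own representation. The sketch below shows one such iteration in plain NumPy, using mean aggregation and a ReLU update; the weight matrices and toy graph are placeholders rather than any specific published GNN.

```python
import numpy as np

def message_passing_step(H, A, W_self, W_neigh):
    """One MP iteration: aggregate neighbor messages, then update each vertex.

    H: (n_nodes, d) node representations, A: (n_nodes, n_nodes) adjacency matrix.
    """
    degrees = A.sum(axis=1, keepdims=True).clip(min=1)        # avoid division by zero
    messages = (A @ H) / degrees                              # mean over neighbors
    return np.maximum(H @ W_self + messages @ W_neigh, 0.0)   # ReLU update

# Toy graph of 4 nodes with 8-dimensional representations.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))
W_self, W_neigh = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

for _ in range(2):          # two rounds of neighborhood aggregation
    H = message_passing_step(H, A, W_self, W_neigh)
print(H.shape)              # (4, 8)
```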
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner.
Output:
| [
"What is the difference between Long-short Term Hybrid Memory and LSTMs?"
] | task461-20ac112e8a694ef9a8ddfcc507c154d5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2.
Output:
| [
"How many tweets did they collect?"
] | task461-020cae3c71f348de842c51641ffdb534 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Plackett-Luce Model for SMT Reranking
After being de-duplicated, the N-best list has an average size of around 300, with 7,491 features. This experiment shows that, with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree.
Output:
| [
"What experiments with large-scale features are performed?"
] | task461-7baf4301fe2644eb8d68357d54c631c3 |
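Under the Plackett-Luce model referred to in the context above, the probability of an observed ordering of an N-best list factorizes into softmax terms over the candidates not yet chosen. The sketch below computes that log-likelihood for a toy list; the scores stand in for the dot product of a hypothesis's features with a weight vector and are invented for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def plackett_luce_log_likelihood(scores, ranking):
    """log P(ranking | scores) under the Plackett-Luce model.

    scores:  (n,) model scores for the n hypotheses in an N-best list
             (e.g. a feature vector dotted with the current weight vector).
    ranking: permutation of 0..n-1, best hypothesis first (e.g. by BLEU).
    """
    ordered = np.asarray(scores)[np.asarray(ranking)]
    # At step k the k-th ranked item is chosen among all items not yet picked.
    return sum(ordered[k] - logsumexp(ordered[k:]) for k in range(len(ordered)))

scores = np.array([2.0, 0.5, -1.0])     # hypothetical reranker scores
print(plackett_luce_log_likelihood(scores, ranking=[0, 1, 2]))
print(plackett_luce_log_likelihood(scores, ranking=[2, 1, 0]))  # much lower
```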