input
stringlengths 3.23k
13.3k
| output
sequencelengths 1
3
| id
stringlengths 40
40
|
---|---|---|
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters.
Output:
| [
"What is dataset used for news-driven stock movement prediction?"
] | task461-469e1b7c2b664508982ad91dbc0d063a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization. This module starts with a standard information retrieval library to retrieve a small set of candidates for fine-grained filtering as cao2018retrieve. To do that, all non-alphabetic characters (e.g., dates) are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates. The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles. Therefore, the Fast Rerank module is developed to identify a best template from the candidates based on their deep semantic relevance with the source article. We regard the candidate with highest relevance as the template. As illustrated in Figure FIGREF6 , this module consists of a Convolution Encoder Block, a Similarity Matrix and a Pooling Layer.
Output:
| [
"How are templates discovered from training data?"
] | task461-13da1ff6e5334e269c3a0ef7a3ca77ed |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Corpus utilizados ::: Corpus 5KL
Este corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea.
Las características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4).
El corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias.
Otra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real.
Corpus utilizados ::: Corpus 8KF
Un corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \le 3$ palabras) o demasiado largas ($N \ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases.
Las características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15).
Output:
| [
"What datasets are used?"
] | task461-6e548c16c2804a7ebf003461d4a5d32d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Text Representation.
The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. Photo Representation.
Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \times 224$ pixels as in BIBREF18.
Output:
| [
"How does PolyResponse architecture look like?"
] | task461-00d880a54eff4a52bffbe6108b3cdbc7 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive).
Output:
| [
"How does their model learn using mostly raw data?"
] | task461-1225c1e8346f4f1aae082c2c67c13aaa |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Analysis of challenges from UTD
Our system relies on the pseudotext produced by ZRTools (the only freely available UTD system we are aware of), which presents several challenges for MT. Assigning wrong words to a cluster
Since UTD is unsupervised, the discovered clusters are noisy. Splitting words across different clusters
Although most UTD matches are across speakers, recall of cross-speaker matches is lower than for same-speaker matches. As a result, the same word from different speakers often appears in multiple clusters, preventing the model from learning good translations. UTD is sparse, giving low coverage We found that the patterns discovered by ZRTools match only 28% of the audio. This low coverage reduces training data size, affects alignment quality, and adversely affects translation, which is only possible when pseudoterms are present.
Output:
| [
"what challenges are identified?"
] | task461-273617640d99401583fd93b6ab27fd03 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Corpus Embedding
Suppose that INLINEFORM0 is the target low-resource corpus, we are interested in optimizing the acoustic model with a much larger training corpora set INLINEFORM1 where INLINEFORM2 is the number of corpora and INLINEFORM3 . Each corpus INLINEFORM4 is a collection of INLINEFORM5 pairs where INLINEFORM6 is the input features and INLINEFORM7 is its target.
Our purpose here is to compute the embedding INLINEFORM0 for each corpus INLINEFORM1 where INLINEFORM2 is expected to encode information about its corpus INLINEFORM3 . Those embeddings can be jointly trained with the standard multilingual model BIBREF4 . First, the embedding matrix INLINEFORM4 for all corpora is initialized, the INLINEFORM5 -th row of INLINEFORM6 is corresponding to the embedding INLINEFORM7 of the corpus INLINEFORM8 . Next, during the training phase, INLINEFORM9 can be used to bias the input feature INLINEFORM10 as follows. DISPLAYFORM0
where INLINEFORM0 is an utterance sampled randomly from INLINEFORM1 , INLINEFORM2 is its hidden features, INLINEFORM3 is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure. FIGREF5 . Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective BIBREF29 . The embedding matrix INLINEFORM5 can be optimized together with the model during the training process.
Output:
| [
"How do they compute corpus-level embeddings?"
] | task461-2a49817d0ae1475586656b948ce18444 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . Soft attention Hard Stochastic attention Local Attention
Output:
| [
"Which attention mechanisms do they compare?"
] | task461-467d30b8ba30488a8745cba310934efc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.
Output:
| [
"What datasets did they use?",
"Which social media platform is explored?"
] | task461-f0c5a9ccf4414f6a83dd4f192062d25a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\le $ 50 tokens (yelp50) and $\le $200 tokens (yelp200).
Output:
| [
"What datasets do they use?"
] | task461-556a79942ff34a559cc520e130ef837f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 .
Output:
| [
"What evaluation is conducted?"
] | task461-9a60ef442cfc45fb926516aaf7f4f8e5 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns.
Output:
| [
"In the targeted data collection approach, what type of data is targetted?"
] | task461-ebe536636c92483da3b45775f7baf3fc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links.
Output:
| [
"What other natural processing tasks authors think could be studied by using word embeddings?"
] | task461-8751191efcf14dd39af55c4153736cdb |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The dataset contains approximately 30M utterances of fictional characters.
Output:
| [
"How large is the dataset?"
] | task461-bad75196bcb94296aea85a4a394eed80 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 .
Output:
| [
"Why does not the approach from English work on other languages?"
] | task461-4daba55695cc4a3bb454ee79be02a0c2 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks.
Output:
| [
"What are three main machine translation tasks?"
] | task461-847fe25c90fa482090bfb025a1fecb36 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: As fig:fit shows, estimated test accuracy is highly correlated with actual test accuracy for various datasets, with worst-case values $\mu <1\%$ and $\sigma <5\%$ . Note that the number of free parameters is small ($||\le 6$) compared to the number of points (42–49 model-data configurations), demonstrating the appropriateness of the proposed function for modeling the complex error landscape.
Output:
| [
"What is proof that proposed functional form approximates well generalization error in practice?"
] | task461-2a5a69f945574cf18484b20483df13a4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: the new DMN+ model does not require that supporting facts In addition, we introduce a new input module to represent images.
Output:
| [
"What improvements they did for DMN?"
] | task461-f6315ad402ad4287a3360494c4d71f8f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries.
Output:
| [
"What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space?"
] | task461-37885c803a9e4bf58653ccd3ed25c135 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evalute Bertram on the WNLaMPro dataset of BIBREF0.
Output:
| [
"What is dataset for word probing task?"
] | task461-5efe834ae88742ee966003443e91eb31 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Finally, we transform text to be classified into scalars representing their distance from the constructed hate vector and use these as input to a Random Forest classifier.
Output:
| [
"What classifier did they use?"
] | task461-832c26191b4c4e28a19f31b7a0b93c64 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier. We have created domain-specific word vectors by considering a set 246,945 documents, corresponding to of 184,001 Twitter posts and and 62,949 news articles, all related to finance .
Output:
| [
"What datasets are used?"
] | task461-2a8ea13967d74dcc8ecf6e6805ffefac |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task.
Output:
| [
"What other examples of morphologically-rich languages do the authors give?"
] | task461-990384d79adc4a1c83590d0a6194943c |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 .
Output:
| [
"What is the size of the dataset?"
] | task461-1df82cf36069438a917573b3fb8ee78e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: n-gram: We consider 3-gram and 4-gram conditional language model baseline with interpolation. We use random grid search for the best coefficients for the interpolated model.
Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.
LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.
Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\beta _{1} = 0.85$, $\beta _{2} = 0.997$) and dropout probability set to 0.2.
GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters.
Output:
| [
"What baseline models are offered?"
] | task461-151fe5a0d1f04080b2a51e95c4690eca |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
Output:
| [
"What is the size of this dataset?"
] | task461-30e70244da134379a2f26743b689ff49 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously.
Output:
| [
"What type of classifiers are used?"
] | task461-086bd69d122d4a3795e73c987a011c38 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our data has been developed by crawling and pre-processing an OSG web forum. The forum has a great variety of different groups such as depression, anxiety, stress, relationship, cancer, sexually transmitted diseases, etc.
Output:
| [
"How did they obtain the OSG dataset?"
] | task461-ce336fd18fc845919e21023adc8ba789 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.
Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes.
Output:
| [
"How they evaluate if joke is humorous or not?"
] | task461-f45f6ae2a3884e3b865662d04c4bc35d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT.
Output:
| [
"What are the three languages studied in the paper?"
] | task461-6e64313266ba429bbc795060ffbddcfd |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively.
Output:
| [
"Do they employ their indexing-based method to create a sample of a QA Wikipedia dataset?"
] | task461-9dd59e942e014d1fbf62e4bf01a698bc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 .
Output:
| [
"Which dataset do they use?"
] | task461-cc5192d4262b47d9aa4db3ffe4997808 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task.
Output:
| [
"How big is Switchboard-300 database?"
] | task461-046ff87be6774bc18079c7607acfd39a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines.
Output:
| [
"what were the evaluation metrics?"
] | task461-10c4848e5e1b45b29a4bfad04bcefcf8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In other words, there exist a few directions in the embedding space which disproportionately explain the variance. It is clear that the visual features in the case of How2 dataset are much more dominated by the "common" dimensions, when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction.
Output:
| [
"What is result of their Principal Component Analysis?"
] | task461-ba4057a9b5764a1788e2fdfe18b4be3f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.
Output:
| [
"What BERT model do they test?"
] | task461-d13413af9591442cb6b2376910c2cdbc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We utilize the VATEX dataset for video captioning, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. It covers 600 human activities and a variety of video content. Each video is paired with 10 English and 10 Chinese diverse captions. We follow the official split with 25,991 videos for training, 3,000 videos for validation and 6,000 public test videos for final testing.
Output:
| [
"How big is the dataset used?"
] | task461-9816da3179874197a445c2d7368e4ab7 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.
The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop.
Output:
| [
"Is the model evaluated against a CNN baseline?"
] | task461-bd150def376e49a2a98e58f0866d17d1 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function.
Output:
| [
"How do their models decide how much improtance to give to the output words?"
] | task461-5f8be80971cf4056ba940d7e865f9a64 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, instead of learning meaning in the flexible and generalizable way that humans do. Given this, human annotators—be they seasoned NLP researchers or non-experts—might easily be able to construct examples that expose model brittleness.
Output:
| [
"What are the weaknesses found by non-expert annotators of current state-of-the-art NLI models?"
] | task461-f8ff5f5489f0430e9f8beb287ab66d72 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31. Another related area are meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively ”fraudsters” or ”cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage.
Output:
| [
"What are the key issues around whether the gold standard data produced in such an annotation is reliable? "
] | task461-32323413e9b846939ca59f0215a1c4b0 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.
Output:
| [
"what keyphrase extraction models were reassessed?"
] | task461-472967b10d874beea6807775205d7927 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We first describe our models individually and then the ensembling technique that we employ. In the following, MN denotes Memory Networks to encode conversational history, RCNN signify R-CNN for object level representations of an image, Wt represents additional linear layer in the decoder, and LF a late fusion mechanism as defined in BIBREF0.
Models ::: LF-RCNN
Late fusion encoder BIBREF0 with concatenated history. We use two-layered LSTMs with 512 hidden units for embedding questions and history. The object-level features are weighed using only question embeddings. The word embeddings from Glove vectors are frozen and are not fine-tuned. Figure FIGREF6 gives an overview of the architecture.
Models ::: MN-RCNN
Memory network encoder BIBREF0 with bi-directional GRUs and word embeddings fine-tuned. Object-level features are weighed by question and caption embedding. The rest of the scheme is same as above. (Figure FIGREF6)
Models ::: MN-RCNN-Wt
Same as above but with an additional linear layer applied to the dot product of candidate answer and encoder output, and gated using tanh function. Compare Figure FIGREF6 with Figure FIGREF6
Output:
| [
"Which three discriminative models did they use?"
] | task461-5826121a2425423bbf9cc6269671feee |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We choose the following three models as the baselines:
K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.
LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.
DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration.
Output:
| [
"What baseline approaches does this approach out-perform?"
] | task461-89e34d4290df4c7eaaa8a929341ec104 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials.
Output:
| [
"How was performance of previously proposed rules at very large scale?"
] | task461-53952f28392c49f89b3d67fcecfdecdc |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We compare with the following baselines:
(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.
(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.
(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.
(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.
(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.
(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel.
Output:
| [
"What are the baseline methods?"
] | task461-4eb248cfc6944c1a9f8fd501cd08c98e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Given a proof state $= (_, _)$ , where $_$ and $_$ denote a substitution set and a proof score, respectively, unification is computed as follows: The resulting proof score of $g$ is given by:
$$ \begin{aligned} \max _{f \in \mathcal {K}} & \; {unify}_(g, [f_{p}, f_{s}, f_{o}], (\emptyset , )) \\ & = \max _{f \in \mathcal {K}} \; \min \big \lbrace , \operatorname{k}(_{\scriptsize {grandpaOf}:}, _{f_{p}:}),\\ &\qquad \qquad \qquad \operatorname{k}(_{{abe}:}, _{f_{s}:}), \operatorname{k}(_{{bart}:}, _{f_{o}:}) \big \rbrace , \end{aligned}$$ (Eq. 3)
where $f \triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $_{s:}$ is the embedding representation of a symbol $s$ , $$ denotes the initial proof score, and $\operatorname{k}({}\cdot {}, {}\cdot {})$ denotes the RBF kernel.
Output:
| [
"How are proof scores calculated?"
] | task461-9136092ef5f647c68782e103980d345a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.
Output:
| [
"How was this data collected?"
] | task461-9d210583784d4f71a117b67866440bf6 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The accuracy of our model is 7.8% higher than the best result achieved by LSVM. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set.
Output:
| [
"What are state of the art methods authors compare their work with? "
] | task461-a739a46fb899481589a8212991a21bf7 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions.
Output:
| [
"What training settings did they try?"
] | task461-cebdcf77d58347fc88b54e345b813416 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia.
Output:
| [
"Which datasets do they use?"
] | task461-f0442cb29f0b4582a220c643732f31b4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. 0.95^{T-1} reward 0.95^{T-1}
Output:
| [
"What is the reward model for the reinforcement learning appraoch?"
] | task461-93271390f83c46e1aeaef757619b0948 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity.
Output:
| [
"What cluster identification method is used in this paper?"
] | task461-741378f6336b4f12b220ff72a24ef41b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task.
Output:
| [
"Where did this model place in the final evaluation of the shared task?"
] | task461-179e4927fba74d70a73653f2b1366320 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We show the final results for opinion recommendation, comparing our proposed model with the following state-of-the-art baseline systems:
RS-Average is the widely-adopted baseline (e.g., by Yelp.com), using the averaged review scores as the final score.
RS-Linear estimates the rating score that a user would give by INLINEFORM0 BIBREF49 , where INLINEFORM1 and INLINEFORM2 are the the training deviations of the user INLINEFORM3 and the product INLINEFORM4 , respectively.
RS-Item applies INLINEFORM0 NN to estimate the rating score BIBREF50 . We choose the cosine similarity between INLINEFORM1 to measure the distance between product.
RS-MF is a state-of-the-art recommendation model, which uses matrix factorisation to predict rating score BIBREF8 , BIBREF41 , BIBREF25 .
Sum-Opinosis uses a graph-based framework to generate abstractive summarisation given redundant opinions BIBREF51 .
Sum-LSTM-Att is a state-of-the-art neural abstractive summariser, which uses an attentional neural model to consolidate information from multiple text sources, generating summaries using LSTM decoding BIBREF44 , BIBREF3 .
All the baseline models are single-task models, without considering rating and summarisation prediction jointly. The results are shown in Table TABREF46 .
Output:
| [
"What are the baselines?"
] | task461-918c2d5f164c4cb68591c5156a4a4e5d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.
Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense.
Output:
| [
"What annotations are available in the dataset?"
] | task461-54f1ab850dac4c6d9bff77d3c3b14b0a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
Output:
| [
"What data did they use?"
] | task461-bbfc888cf3d2412e93e6c53e1afb144d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We first evaluate our model on MultiWOZ 2.0 dataset as shown in Table TABREF16. We compare with five published baselines. TRADE BIBREF3 is the current published state-of-the-art model. Table TABREF24 shows the results on WOZ $2.0$ dataset. We compare with four published baselines. SUMBT BIBREF17 is the current state-of-the-art model on WOZ 2.0 dataset.
Output:
| [
"What is current state-of-the-art model?"
] | task461-ee6c5010a05f462c9f1efddff3c5ad3f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our attempt at language pre-training fell short of our expectations in all but one tested dataset. Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported.
Output:
| [
"Does pre-training on general text corpus improve performance?"
] | task461-70fc8bdfb740482fafe6a3d359bb553e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description. ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words.
Output:
| [
"Can the model add new relations to the knowledge graph, or just new entities?"
] | task461-94b3cf1dcdc640e9aec78944cd4acc69 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher.
Output:
| [
"What is the source of the news articles?"
] | task461-70e63d1587a94c04b05d508a700b0695 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Despite this, we found that the training of our embeddings was not considerably slower than the training of order-2 equivalents such as SGNS. Explicitly, our GPU trained CBOW vectors (using the experimental settings found below) in 3568 seconds, whereas training CP-S and JCP-S took 6786 and 8686 seconds respectively.
Output:
| [
"Do they measure computation time of their factorizations compared to other word embeddings?"
] | task461-f6f5e4f5bd994ab689498d6ce01078fd |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 .
Output:
| [
"What was their performance on emotion detection?"
] | task461-0aca5bdff95147ca8aeed0bb16f83192 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01.
Output:
| [
"Do they conduct any human evaluation?"
] | task461-4795e77518174ffd9ef0a67941aa9290 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The first, referred to here as seq2seq, is the standard RNN-based neural machine translation system with attention BIBREF0 . This baseline does not use the parsed data. The multi-source systems improve strongly over both baselines, with improvements of up to 1.5 BLEU over the seq2seq baseline and up to 1.1 BLEU over the parse2seq baseline.
Output:
| [
"Whas is the performance drop of their model when there is no parsed input?"
] | task461-6a79ef8317744181acb06432f8feebb8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We extracted Japanese word pairs from the Evaluation Dataset of Japanese Lexical Simplification kodaira.
Output:
| [
"where does the data come from?"
] | task461-5048958f5ccd4da08d6bc95bb150c0ab |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work.
Output:
| [
"Can the method answer multi-hop questions?",
"Is their method capable of multi-hop reasoning?"
] | task461-2b9360d4a3104b13887cc50ee9e09a51 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254.
Output:
| [
"Which baselines did they compare?"
] | task461-03cd4e4b81774cb7a0ddf61290b0ceba |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Introduced by BIBREF16 , in this dataset, the input query is the concatenation of a Wikipedia article title with the title of one of its section. The relevant passages are the paragraphs within that section. The corpus consists of all of the English Wikipedia paragraphs, except the abstracts. The released dataset has five predefined folds, and we use the first four as a training set (approximately 3M queries), and the remaining as a validation set (approximately 700k queries). The test set is the same one used to evaluate the submissions to TREC-CAR 2017 (approx. 1,800 queries).
Output:
| [
"What is the TREC-CAR dataset?"
] | task461-8c2ddbf99b3845a485e78f07359f4238 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set they have been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and raises by twenty on the combined test set.
Output:
| [
"How is language modelling evaluated?"
] | task461-08c235edd87942eb8c6a5d50238db0ff |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset. We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on.
Output:
| [
"What was state of the art on SemEval-2014 task4 Restaurant and Laptop dataset?"
] | task461-4e711a7f6f8c428d80ec2ce54827a08f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she.
Output:
| [
"what kinds of male and female words are looked at?"
] | task461-38738696beee41439c8408c24f5d0ff6 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc. Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited.
Output:
| [
"How do they determine the exact section to use the input article?"
] | task461-dbe837b7c04e401eb38906fa7953a0f8 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Recently, Wen et al. wensclstm15 proposed an RNN-based approach, which outperformed previous methods on several metrics. However, the generated sentences often did not include all desired attributes.
Output:
| [
"How is some information lost in the RNN-based generation models?"
] | task461-5645373915c6442ca2bdc4d6fa2fd206 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We use the publicly available dataset KVRET BIBREF5 in our experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12.
Output:
| [
"What is the size of the dataset?"
] | task461-2d2f67bb07314b908c6a4039e9fb6b85 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total.
Output:
| [
"How large is the corpus?"
] | task461-2e1beff893c64f92a93b6400699e1237 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
Output:
| [
"How do they evaluate their sentence representations?"
] | task461-f8a4295364f141e6b163592313677f27 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions.
Output:
| [
"What neural models are used to encode the text?"
] | task461-12441ea54fd74bc89edb0647ed1761e9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
Output:
| [
"What is the baseline method for the task?"
] | task461-c244aa4a1bf44b9395c3f890e5ba1f58 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly.
Output:
| [
"How does their simple voting scheme work?"
] | task461-3ece93571fcf459e8cb72ac735c5d5b3 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In the first scenario we equip the decoder with an additional morphology table including target-side affixes.
Output:
| [
"What type of morphological information is contained in the \"morphology table\"?"
] | task461-2b50226580a8423aa76e8f48b2ce460f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains The evaluation of the German version is in progress.
Output:
| [
"What are the corpora used for the task?"
] | task461-451d193e1a464b89926cf4035208bf26 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities.
Output:
| [
"How large is the dataset?"
] | task461-d0b5f781ec8f4f6ea294507f04e73890 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure FIGREF8 . We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K) each representing the number of approximate parameters, and 2 types of training recipes (suffixes 1stage and 2stage) corresponding to end-to-end and encoder+decoder respectively, as described in UID7 .
Output:
| [
"How many parameters does the presented model have?"
] | task461-6dc5e548e77341f390b42ce44915f80e |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We then train models on WMT English-German (EnDe) without BT noise, and instead explicitly tag the synthetic data with a reserved token. We repeat these experiments with WMT English-Romanian (EnRo), where NoisedBT underperforms standard BT and TaggedBT improves over both techniques. We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs.
Output:
| [
"What datasets was the method evaluated on?"
] | task461-d92528e221694868a9c93359e05a87c9 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In this section, we evaluate our method and compare its performance against the competitive approaches. We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples.
Output:
| [
"What evaluation metrics are used?"
] | task461-7d8a40a2652f4dfca1c92df545624585 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: On another note, we apply our formalization to evaluate multilingual $\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings. We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages. Finally, when put into perspective, multilingual $\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\%$ additional information.
Output:
| [
"Was any variation in results observed based on language typology?"
] | task461-945e13c8a0eb4720bf8655b59a5c41ac |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. We compare to the best n-gram-overlap metrics from toutanova2016dataset; We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy: Due to its popularity, we also performed initial experiments with BLEU BIBREF17 .
Output:
| [
"Is ROUGE their only baseline?"
] | task461-f995ec7c66fa4c0fb279b4cabfb49b0a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases.
Output:
| [
"How many paraphrases are generated per question?"
] | task461-fb829a6bef38498ea340f3ba6fee7c2d |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The ratio of correct `translations' (matches) was used as an evaluation measure.
Output:
| [
"What evaluation metric do they use?"
] | task461-9332536373b34d4c89614859e6435db7 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral
Output:
| [
"How many emotions do they look at?"
] | task461-8486832f1c1b458e9993c6803a4ec249 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: We test the character and word-level variants by predicting hashtags for a held-out test set of posts.
Output:
| [
"What other tasks do they test their method on?"
] | task461-c56967a76c4747959231798b9ba98fee |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets.
Output:
| [
"What is the size of the corpus?"
] | task461-7968ed18d66740c698ad310eba7282ac |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\mathcal {A}$ denoted $X = \lbrace X_i, -\infty < i < \infty \rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and
This is sometimes called the smoothing requirement.
Output:
| [
"Is the assumption that natural language is stationary and ergodic valid?"
] | task461-b625331e51724859aeb033f52267dc7f |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large.
Output:
| [
"What are strong baseline models in specific tasks?"
] | task461-d50d57487e4546fd8d7c1cb8d1c02469 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The top performing approaches Caravel, COAV and NNCD deserve closer attention.
Output:
| [
"Which is the best performing method?"
] | task461-dcc0f1c28f1841089a3d3283bd1b214a |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms.
Output:
| [
"How are the topics embedded in the #MeToo tweets extracted?"
] | task461-170a17fc7d254b01902253bf1e954e71 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The embeddings for words and POS tags were pre-trained on a large unannotated corpus consisting of the first 1 billion characters from Wikipedia.
Output:
| [
"Do they use pretrained models as part of their parser?"
] | task461-81c7b170c1ca4e608ab49f57fcc0c6d4 |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
Output:
| [
"How were the feature representations evaluated?"
] | task461-14a4dca8d392414ebd5ed953f5c6249b |
Definition: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions.
Positive Example 1 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: how was the dataset built?
Positive Example 2 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language are the tweets?
Negative Example 1 -
Input: Using this annotation model, we create a new large publicly available dataset of English tweets.
Output: In what language is tweets?
Negative Example 2 -
Input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Output: What is the size of the dataset?
Now complete the following example -
Input: The Logoscope retrieves newspaper articles from several RSS feeds in French on a daily basis.
Output:
| [
"How often are the newspaper websites crawled daily?"
] | task461-bf7cedbb25704eafb09bbd25e0b88c96 |
Subsets and Splits