id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values)
---|---|---|---|
train_92200 | This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). | this isolates the importance of dealing with casing, and makes our observations applicable to situations where modifying the model is not feasible, but retraining is possible. | neutral |
train_92201 | For sentences involving reporting verbs like said, told, asked, etc., some systems annotate additional attributional context for every utterance (Mausam et al., 2012). | this can be attributed to the fact that this dataset was not manually curated for Open IE, rather QA-SRL data was adapted for this task. | neutral |
train_92202 | In original OIE2016 paper, the authors "match an automated extraction with a gold proposition if both agree on the grammatical head of all of their elements (predicate and arguments)". | it does not reward partial coverage of gold tuples, and forces one system prediction to match just one gold. | neutral |
train_92203 | We see that the model primarily loses out on the text based measure, and performs quite well on the type based one. | this issue can potentially be mitigated with either a more sophisticated tagger module or a better stopword/symbol removal mechanism, which we leave to future work. | neutral |
train_92204 | This results in the model picking out spurious words alongside the actual entity, which necessitates the use of stop-word and symbol removal. | we observe that our proposed approach significantly outperforms the baseline, and performs reasonably well when compared to various stateof-the-art supervised approaches that use significantly more ground-truth annotation information. | neutral |
train_92205 | We experiment both with using the embeddings as is, and fine-tuning the top layers of the attention encoder (we only try fine-tuning the top, top-two and top-three layers due to computational constraints). | this usually requires a large amount of fine-grained data annotated at the token level, which in turn can be expensive and cumbersome to obtain. | neutral |
train_92206 | The motivation is that the EUs cannot cross over a sentence. | this study considers five types of EUs. | neutral |
train_92207 | It would be very difficult to infer the correct stance label if we do not consider the hashtags "#NoHillary" and "#VoteGOP." | our full model is called AT-JSS-Lex. | neutral |
train_92208 | First, an ablation experiment is used to determine the importance of each component of our proposed model for stance detection: • AT-JSS-Lex is a lexicon integrated multi-task model with attention mechanisms. | we construct a stance lexicon for each target from the training data available from the Se-mEval 2016 dataset and from an extra 1,000 tweets for each target that we collected from Twitter using specific hashtags. | neutral |
train_92209 | We compare our method against three different baselines. | they delete sentiment-specific phrases (e.g., "works great") from a given sentence. Figure 1: the retrieved sentences (right) from the opposite corpora given the query sentences (left); (a) the service was great too → the service was n't too great, (b) this one is by far the worst → this one is the best by far. | neutral |
train_92210 | Given a corpus with vocabulary size V, we represent each token x_i ∈ {0, 1}^V as a one-hot vector with a single nonzero entry at v. Here, v ∈ V represents the word type at position i in the text. | substantive interpretability is key for CssDH research, and thus, the lack thereof is a major limitation. | neutral |
train_92211 | We examine the proposed priors using three commonly sized English corpora for textual analysis within CSSDH: the top 100 list of books in Project Gutenberg (2019), a sample from Twitter (Go et al., 2009) and the U.S. Congress Speeches 1981-2016 (Gentzkow et al., 2018). | consequently, the dot product for these vectors will be 0 for all dimensions except K, and thus the effect of the anchored word types on the rest of the vocabulary will exclusively depend on K. this implies that, as γ, ω → 0, word types within V − and V + obtain exactly the same word embedding. | neutral |
train_92212 | In socio-linguistic literature even though there are many studies that observe propagation of hate speech (Ribeiro et al., 2018;Salminen et al., 2018) and abusive behaviour (Founta et al., 2018;Maity et al., 2018;Mathew et al., 2019a,b) in social media an in-depth analysis of how sarcastic message travels in social networks and how tweets around the targets behave is an area which social scientists need to investigate. | • Multiple sarcasm targets: There can be multiple sarcasm target phrases present in the sentence. | neutral |
train_92213 | Sarcasm target identification can also benefit natural language generation; for example, after detection of entity toward which a negative sentiment is expressed in a sarcastic text, a natural language generation system will have more context to generate a response. | if a user expresses a sarcastic utterance such as "My laptop has an awesome battery life that lasts for 15 minutes", the tool should recognize that the speaker is expressing a negative sentiment toward the battery life of the laptop, even though, it has a positive sentiment word 'awesome' in it. | neutral |
train_92214 | Results show that identifying informational bias poses additional difficulty and suggest future directions of encoding contextual knowledge from the full articles as well as reporting by other media. | in order to evaluate token-level prediction on the larger original test set, we conduct a pipeline experiment with the fine-tuned BERT models where sentences predicted as containing bias by the best sentence-level classifier from cross validation are tagged by the best token-level model. | neutral |
train_92215 | A visual inspection indicates that this may be attributed in part to media sources' attempts to hook readers with inflammatory speech early on (e.g., FOX: "Paul Ryan stood his ground against a barrage of Biden grins, guffaws, snickers and interruptions."). | the results reaffirm our hypothesis that while both tasks are extremely difficult, informational bias is more challenging to detect. | neutral |
train_92216 | From YouTube, we collect broadcast football transcripts and identify mentions of players, which we link to metadata. | we now demonstrate confounds in the data and revisit several established results from racial bias studies in sports broadcasting. | neutral |
train_92217 | Recently, there has been a resurgent interest in this task due to the availability of more data and new machine learning techniques (Luo et al., 2017;Zhong et al., 2018b;. | we stack multiple blocks of an LSTM layer and a charge-specific gating layer for generating a focused charge-based representation of the case description. | neutral |
train_92218 | First, the humor judgments differ from individual to individual. | for each prompt, the writing staff of the show pick a top 10 and an overall winner. | neutral |
train_92219 | In this way, we can use only a small portion of labeled data to predict the remaining unlabeled data effectively (Zhou et al., 2003). | w_s(i, j) indicates the frequency that words w_i and w_j cooccur in s within the window H. In this way, we can capture the lexical patterns of s in w_s. | neutral |
train_92220 | Different words in the same news may have different importance in representing this news. | besides, we apply additive attentions to both news and user encoders to select important words and news to learn more informative news and user representations. | neutral |
train_92221 | The proposed method benefits from multiple transferable knowledge and shows competitive performances with the state of the art using limited resources. | we replace English words with target language words if the translation pairs exist in the bilingual dictionaries (Rolston and Kirchhoff, 2016). | neutral |
train_92222 | Unfortunately, the proposed method fails to find the famous anagram for these inputs. | since the number of FSA states grows exponentially with input length, it rapidly becomes intractable. | neutral |
train_92223 | Anagram Artist can generate anagrams consisting of at most 5 words given inputs with lengths less than 25 characters. | since the superiority of neural language models over n-gram models has been reported in previous works (e.g., (Du et al., 2016;Józefowicz et al., 2016)), we conducted experiments using only ELMo. | neutral |
train_92224 | The current state-of-the-art model is based on reinforced learning using the (in)equivalence of two regular expressions. | stockmeyer and Meyer (1973) showed that it is PsPACE-complete to decide if two regular expressions generate the same set of words. | neutral |
train_92225 | The embedding dimension size is set to 4, since the size of vocabulary for regular expressions is relatively small compared to natural languages. | there are situations that even expert humans cannot accurately classify. | neutral |
train_92226 | We used Adam Optimizer to train all our models. | we dropped all clinical notes which doesn't have any chart time associated and also dropped all the patients without any notes. | neutral |
train_92227 | They are then asked to guess the L1 translation of each L2 word type that appeared at least once in the passage. | our teacher does not yet attempt to monitor the human student's actual learning. | neutral |
train_92228 | For instance, the Automatic Content Extraction (ACE) (Doddington et al., 2004) corpus contains more than 500 documents manually annotated with examples for 33 event types. | there is a long tail of examples where long range dependencies need to be resolved in order to type them correctly. | neutral |
train_92229 | Second, we adopt bidirectional LSTMs (J) for CATD-MATCH, without an obvious improvement, probably because most messages in the datasets can be fully comprehended only with previous history. | searching the entire space of T is unfeasible. | neutral |
train_92230 | If the sentiment classifying knowledge about how to comment on these concepts can be acquired, it will be helpful for sentiment classification when meeting these concepts in free texts again. | ♦ Sentiment classification is a hard task, and it needs subtly describing capability of language model. | neutral |
train_92231 | Classical technology in text categorization pays much attention to determining whether a text is related to a given topic [1], such as sports and finance. | all these work enlightened us on the research on Concerned Concepts in given domain. | neutral |
train_92232 | 'Headlines and NE' denotes the best result obtained by our method, i.e. | we explore two linguistically motivated restrictions on the set of words used for tracking: named entities and headline words. | neutral |
train_92233 | A topic is related to a specific place and time, and an event refers to notions of who(person), where(place), when(time) including what, why and how in a story. | 4) and (2) the tracking using various number of word sequences (Fig. | neutral |
train_92234 | Overall, the result of 'with hierarchy' was better than that of 'without hierarchy' in all N t values. | dragon Systems proposed two tracking systems; one is based on standard language modeling technique, i.e. | neutral |
train_92235 | In work with non-parallel corpora, contexts of source language terms and target language terms and a seed translation lexicon are combined to measure the association between the source language terms and potential translation candidates in the target language. | after retrieving some sub-documents for a given topic from the target corpus, we take a set of top ranked sub-documents, regarding them as relevant sub-documents to the query, and extract terms from these sub-documents. | neutral |
train_92236 | In the recent past, many models for automatic image annotation are limited by the scope of the representation. | this model combines both asymmetric clustering model which maps words and image regions into clusters and symmetric clustering model which models the joint distribution of words and regions. | neutral |
train_92237 | In order to generate appropriate annotations, a simple language model is developed that takes the word-correlation information into account, and then the textual description is determined not only by the model linking keywords and blob-tokens but also by the word-to-word correlation. | due to rough fixed-size rectangular grids, the extracted blocks are unable to model objects effectively, leading to poor annotation accuracy in our experiment. | neutral |
train_92238 | In this paper, the cooccurrence between nouns and verbs is measured by mutual information. | japanese relative clause modification should be classified into at least two major semantic types: case-slot gapping and head restrictive. | neutral |
train_92239 | For nouns that do not tend to be modified by 'outer' clauses, such as " "(people), " " (city), and " "(television), the ratio between the frequency and the number of verbs is almost the same between the relative clause and case-slot cases. | murata [11] presented a statistical method of classifying whether the relative clause is an 'inner' or an 'outer' clause. | neutral |
train_92240 | Given the types of Japanese relative clause constructions and a corpus of Japanese relative clause construction instances, we present a machine learning based approach to classifying RCC's. | in this paper we present eight features are effective in classifying case-slot gapping and head restrictive relative clauses. | neutral |
train_92241 | The results we obtained from evaluation revealed that our method outperformed the traditional case frame-based method, and the features that we presented were effective in identifying RCC's types. | they judged the instances as 'outer' clauses, only if case-slot filling did not succeed. | neutral |
train_92242 | The percentage of " " cooccurring with noun 7. | the accuracy of this feature is not so good compared with other features. | neutral |
train_92243 | Our interpretation of this latter result is that the primary limitation of the lexicon is coverage, despite its size. | this simple fix raised tagging accuracy to 95.9%. | neutral |
train_92244 | These dependencies are neither too fine-grained nor too coarsegrained compared with bilexical and biclass dependencies, and really help to alleviate fundamental information sparseness in treebank. | and that may be also true for discriminative models, since these models can easily incorporate richer features in a wellfounded fashion. | neutral |
train_92245 | Nominal Compounds (NCs) Disambiguation: Nominal compounds are notorious "every way ambiguous" constructions. | the coordinated structure VP is parsed as: VP_0 → VP_1 IP_2. | neutral |
train_92246 | As this method requires only a search engine, it can segment texts that are normally difficult to process by using language tools, such as institution names (5,6), colloquial expressions (7 to 10), and even some expressions taken from Buddhist scripture (11,12). | we can also consider the reverse order, which involves calculating h for the previous element of x_n. | neutral |
train_92247 | For example, it is easier to guess what comes after x_6 = "natura" than what comes after x_5 = "natur". | this assumption is illustrated in Figure 1. | neutral |
train_92248 | The reason for the choice of languages lies in the fact that the process utilised here is based on the key assumption regarding the semantic aspects of language data. | this paper verifies assumption (A) in a fundamental manner. | neutral |
train_92249 | We assumed that attributes that conformed to the naturalness criterion would be such important attributes. | we refer to this set of documents as a local document set (LD(C)). | neutral |
train_92250 | However, we think this attribute is also useful in practice for people who have a keen interest in foreign films. | figure 5 has graphs for the general attribute case the same as for the relaxed case. | neutral |
train_92251 | Furthermore, it is much easier to choose and access information for solving problems by using the answers of our QA system than by using the answers of the full text retrieval system. | our knowledge base is composed of these labeled sets of a question and its answer mail. | neutral |
train_92252 | However, these kinds of documents requires the considerable cost of developing and maintenance. | questioner's reply mails are questioner's answers to the direct answer mails. | neutral |
train_92253 | Table 4 (c) shows the number and type of confirmation labels which were given to proper answers. | the precision of the significant sentence extraction was emphasized in this task. | neutral |
train_92254 | When multiple rules match an input string at a given position, the longest-matching rule is selected. | 6, only the lexical entries for the second and the fourth morphemes in the substring are selected as additional lexical information, and none of the contexts is selected in this case. | neutral |
train_92255 | Experiments on the STEP 2000 dataset showed that the proposed method yields an F-score of 95.36%. | firstly, we explore characteristics and chunk types of Korean. | neutral |
train_92256 | Since Ramshaw and Marcus approached NP chunking using a machine learning method, many researchers have used various machine learning techniques [2,4,5,6,10,11,13,14]. | if we use more features such as semantic information or collocation, we can obtain a better performance. | neutral |
train_92257 | The real parsing performances of accepting input from automatic word segmentation and pos tagging system are shown in the Table 5. | the mother-node feature is the least effective feature, since syntactic structures do not vary too much for each phrase type while playing different grammatical functions in Chinese. | neutral |
train_92258 | The binarization method proposed in our system is different from CNF. | a better approach is to generalizing and specializing rules under linguistically-motivated way. | neutral |
train_92259 | Only the applicable patterns for each word were increased. | grammar generalization and specialization methods were discussed in section 3. | neutral |
train_92260 | In our experiments, we consider grammars suited for PCFG parsing. | they also suffer problems of over-generation and structure-ambiguity. | neutral |
train_92261 | is highly ambiguous in English texts. | it looks simple at first glance since there are a very small number of punctuations, namely, period (". | neutral |
train_92262 | This result suggests that our method could capture proper places of MCT pairs with this level of precision. | the criterion for judgment conformed to that of ordinary dictionaries, i.e., the evaluator judges whether given a word pair would be described as a synonym by an ordinary dictionary. | neutral |
train_92263 | Since our method does not cut off acquired synonyms by frequency, synonyms that appear only once can be captured. | their method considers sentential word ordering. | neutral |
train_92264 | The problem is to search the most probable class assignment for each entity and each relation of interest, given the observations of all entities and relations. | for example, if "Boston" is mislabeled as a person, it will never have chance to be classified as the location of Poe's birthplace. | neutral |
train_92265 | The ITG level of expressiveness constitutes a surprisingly broad equivalence class within the expressiveness hierarchy of transduction grammars. | [mis-encoded Chinese example] (If two classifier suspected person bei-particle sentence guilty, then must in Scotland serve time .) | neutral |
train_92266 | This paper reports a new way of extracting terms that is tuned for a very small corpus. | a corpus of each size was generated by singly adding an article randomly selected from the corpus of each genre. | neutral |
train_92267 | Note that the proportion of negative samples and positive samples is 1:6. | in the second step, each document identified unreliable in the first step are further processed by exploring the dependences among features. | neutral |
train_92268 | Note that the data used in the preliminary experiments described in Section 3.3 are a part of Corpus A. | examples of sp-scores include five-star and scores out of 100. | neutral |
train_92269 | The last word of a baseVP represents the entire baseVP to which it belongs. | whereas he deals only with subjects and objects as verb-noun co-occurrence, we used all the kinds of co-occurrence mentioned in Sect. | neutral |
train_92270 | In Korean, a head word usually appears after its modifying words. | 1 shows a part of decision tree learned by C4.5 algorithm. | neutral |
train_92271 | First, instead of only classifying relations by assuming all candidate relations are detected, we perform relation detection before classifying relations by regarding every pair of entities in a sentence as a target of classification. | aCE corpora consist of 519 annotated text documents assembled from a variety of sources selected from broadcast news programs, newspapers, and newswire reports [1]. | neutral |
train_92272 | As we will discuss in section 3 about features, we are working on very high dimensional feature space, and this often leads to overfitting. | for example, relations such as "a town west of Jerusalem", "park outside Paris", "his friend/wife/brother" are highly dependent on the lexical feature, i.e., "west of", "outside", and "friend/wife/brother" are the most important clue to determine the class of the relations. | neutral |
train_92273 | This means that we are not restricted to the limited types of relations defined in MUC [1] or ACE [22]. | the numbers inside parentheses in table 1 and table 2 correspond to the statistical values of the NE pair "PER-GPE", while the numbers outside parentheses are related to the NE pair "COM-COM". | neutral |
train_92274 | This approach does not rely on any annotated corpus and works effectively on high-frequent entity pairs [8]. | the replacement of summation with maximization reduces the computational time greatly. | neutral |
train_92275 | As indicated in subsection 2.1, the "Head Word" of root node defines the main meaning of a parse tree. | there are two problems in this approach: • The assumption that the same entity pairs in different sentences have the same relation. | neutral |
train_92276 | In this paper, we focus on the last learning paradigm, i.e., transductive learning. | then we combine the multiple binary classifiers and get a single classifier. In the example above, a label "PART" is eventually assigned to the tuple. | neutral |
train_92277 | If we compare the performance presented in this subsection with those of the corresponding transductive learners in the previous subsection, we observe the following pattern: NB < SGT < SNoW-SGT < SNoW < SVM-SGT < SVM With regard to the purpose of this study, again, it is most important to notice that the induction-aided transductive learners significantly outperform the "pure" transductive learner. | with the SVM-SGT learner, we get a 78.04% accuracy, and the class-specific performance is summarized in Table 3. | neutral |
train_92278 | Parse the sentences using the Charniak parser [14]. | when it comes to exploiting unlabeled data, the tradeoff between the last two is not yet well understood. | neutral |
train_92279 | -IOB-chains of the heads of the two entities, each of which is a lexicalized path, in other words, a concatenation of the syntactic categories of all the constituents on the path from the root node to this leaf node of the tree (e.g., "S/VP/NP/NN"). | it may be worth the effort to investigate other alternatives. | neutral |
train_92280 | Word unigrams, word bigrams, and semantic categories of nouns are used as features. | the decrease of F-measure suggests the effectiveness of the excluded feature set. | neutral |
train_92281 | Since whether a noun is a product name or not is important for PRODUCT as discussed before, semantic categories of nouns are crucial to PRODUCT. | score(s_i) indicates the likeliness of s_i being the core sentence. | neutral |
train_92282 | Although there is a frequently-used expression "What is [noun] -? | important clues for WAY are phrases such as "How do I". | neutral |
train_92283 | Only the features from the target sentence would not be enough for accurate classification. | in this work, we treat only questions which require one answer. | neutral |
train_92284 | Languages are unique in syllable structures. | to this, though the letter " " exists in Sinhala writing system (corresponding to the consonant sound /j/), it is not considered a phoneme in Sinhala. | neutral |
train_92285 | Hereafter, we will use a source grapheme for a source language grapheme and a target grapheme for a target language grapheme. | many previous works make use of either source grapheme or phoneme. | neutral |
train_92286 | This is because unweighted voting in this paper requires more than half of the word aligners in the ensembles to vote for the same link. | this procedure is the same as the one to construct training sets for N-fold cross-validation. | neutral |
train_92287 | Figure 1 shows some examples of morphological information by Chinese, Japanese, English and Korean morphological analyzers and Figure 2 the correspondences among the words. | under the framework of log-linear models, we investigate the effects of morphosyntactic information with word representation. | neutral |
train_92288 | As the syntactic role of each word within Japanese and Korean sentences are often marked, word order in a sentence plays a relatively small role in characterizing the syntactic function of each word than in English or Chinese sentences. | in this paper, we address the question of the effectiveness of morpho-syntactic features such as parts-of-speech, base forms, and relative positions in a chunk or an agglutinated word for improving the quality of statistical machine translations. | neutral |
train_92289 | Then in the test step, we perform morphological anlysis of a given sentence for word representation corresponding to training corpus representation. | we could get some advantageous Korean morpho-syntactic information in the Chinese-to-Korean translation, i.e., the advantage of language and translation models using morpho-syntactic information. | neutral |
train_92290 | Then we grab the phrase translation pairs that contain at least one intersected word alignment and some unioned word alignments [1]. | korean and Japanese sentences have a relatively free word order; whereas words within Chinese and English sentences adhere to a rigid order. | neutral |
train_92291 | This limits the part-of-speech of the words to be exchanged to nouns and verbs. | w_k: wizard utterances directly following the k highest ranking utterances. Step 2: for each w_i ∈ {w_1, . | neutral |
train_92292 | If we have sufficient data for a specific question pattern like "how long," we will have more chances to obtain alignment counterparts that are effective terms for query expansion. | consider the example of gathering answer passages from the Web for the (Q, A) pair where Q = "What is the capital of Pakistan?" | neutral |
train_92293 | Perform part-of-speech tagging on Q and AP. | we want to find out what terms are associated with "how old" in the answer passages. | neutral |
train_92294 | Taking into account of the four position features, the final character tag set is comprised of 184 tags. | in hidden Markov model, given the sentence, the best character tagging result for the sentences is given by equation 2. We used the People's Daily Corpus of 1998 [13] to train this model. | neutral |
train_92295 | Our approach is better than the word-based method for two test data sets. | we tried to incorporate a lexicon to the model to improve the performance. | neutral |
train_92296 | Among all the features used, only a few might be very relevant to recognizing the non-compositionality of the MWE. | some preliminary work on recognition of V-N collocations was presented in [28]. | neutral |
train_92297 | [12] and IBM phrase-based Model 4 used in IBM method D are very similar in modeling. | x permutations of all the phrases in e_1^x. | neutral |
train_92298 | To reduce vocabulary size and avoid sparseness, we constrain the phrase length to up to three words and the lower-frequency phrase pairs are pruned out for accurate phrase-alignment 1 . | although PM functions as an ngram language model, it only models the ordering connectivity between target language phrases, i.e., it is not in charge of target word selection. | neutral |
train_92299 | In MDS, the source documents could contain multiple topics. | she breaks each sentence into features and takes the vector product of features. Feature conditional probabilities can be calculated using frequency counts of features. Lapata [3] uses nouns, verbs and dependency structures as features. | neutral |
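
For reference, below is a minimal sketch of loading and inspecting a split with the schema above using the Hugging Face `datasets` library. The repository id `"user/this-dataset"` is a placeholder, since the dataset's actual Hub name does not appear on this page.

```python
# Minimal sketch: load and inspect a split with the schema shown above.
# NOTE: "user/this-dataset" is a placeholder repository id, not the real one.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/this-dataset", split="train")

# Each row carries four string fields: id, sentence1, sentence2, label.
row = ds[0]
print(row["id"])         # e.g. "train_92200"
print(row["sentence1"])  # premise (6 to ~1.27k characters)
print(row["sentence2"])  # hypothesis (6 to 926 characters)
print(row["label"])      # one of 4 classes, e.g. "neutral"

# Distribution of the 4 label classes across the split.
print(Counter(ds["label"]))
```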