id         string (lengths 7 to 12)
sentence1  string (lengths 6 to 1.27k)
sentence2  string (lengths 6 to 926)
label      string (4 classes)
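For readers who want to consume rows with this schema programmatically, here is a minimal sketch. It assumes the split is stored as JSON Lines in a file named train.jsonl with one object per row; both the format and the file name are assumptions, not part of this card.

```python
# Minimal sketch: iterate over rows with the schema above
# (id, sentence1, sentence2, label). The path "train.jsonl"
# and the JSONL layout are assumptions; adjust to wherever
# this split is actually stored.
import json
from collections import Counter

def read_rows(path="train.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # Each row is a sentence pair plus one of 4 label classes,
            # e.g. "contrasting" as in all examples shown below.
            yield row["id"], row["sentence1"], row["sentence2"], row["label"]

if __name__ == "__main__":
    # Count the label distribution as a quick sanity check.
    labels = Counter(label for *_, label in read_rows())
    print(labels)  # expected: up to 4 distinct label classes
```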
train_7300
Accordingly, the problem of graph parsing has been conceived in such a way that a graph structure and a graph grammar are provided as input, and the goal is to produce as output one or more grammar derivations that generate the input graph itself.
this problem setting does not apply to the scenario of semantic parsing for natural language.
contrasting
train_7301
Because of this similarity, algorithms for string-to-tree parsing that use dynamic programming can be naturally extended to string-to-graph parsing.
when we do so, we soon realize that there is a gap in computational complexity between the two cases, because of the structural differences between non-crossing trees and general graphs.
contrasting
train_7302
String-to-graph parsing is very closely related to the so-called problem of dependency semantic parsing, where one is given an ordered sequence of vertices and has to output a maximum weight graph defined over those vertices, representing a most-likely semantic analysis of the associated sentence.
with string-to-graph parsing, dependency semantic parsing does not require any hardwired input grammar generating a set of valid graphs.
contrasting
train_7303
For instance, if we parse the sentence into spans [0, 4] and [4, 5], we have I(0, 4) = {1, 3, 4} and I(4, 5) = {5}, which adds up to a bag of 4 vertices at the node with span [0,5].
we also observe that while the span [0, 4] has three connections to outside vertices, all of the connections are made to the same vertex 5.
contrasting
train_7304
In the outside decomposition, the outside connections of span [0, 4] would not be repeatedly counted and the span would have a contribution of 1 to the total width, resulting in a decomposition with smaller width.
an inside tree decomposition can also produce a treewidth smaller than an outside tree decomposition.
contrasting
train_7305
We show that this holds even when the multilingual corpus has been translated into English, by picking up the faint signal left by the source languages.
just as it is a thorny problem to separate semantic from syntactic similarity in word representations, it is not obvious what type of similarity is captured by language representations.
contrasting
train_7306
In this work we follow up on the findings of Rabinovich, Ordan, and Wintner (2017), who, by using language representations consisting of manually specified feature vectors, find that the structure of a language representation space is approximately preserved by translation.
their analysis only stretches as far as finding a correlation between their language representations and genetic distance, even though the latter is correlated to several other factors.
contrasting
train_7307
Given that this new vector is constituted by verbal contexts, it belongs to the same vector space as verbs, and therefore it can be combined with the word vector of catch.
c in Equation (2) represents the vector set of those nouns occurring as direct object of catch in the corpus: {train, bus, disease, ...}.
contrasting
train_7308
The accuracy of this strategy (0.335) is higher than that obtained by the Dict-First baseline, even though there is just a slight improvement with a p-value = 0.03.
the Dict-Corpus-Based strategy is outperformed by the non-compositional baselines in a significant way (p-value = 0.006).
contrasting
train_7309
Like in our experiments in Section 6, it uses a distributional thesaurus as input, as well as multiple pre-and post-processing stages to filter the input graph and disambiguate individual nodes.
to Panchenko et al., here we directly apply the WATSET algorithm to obtain the resulting distributional semantic classes instead of using a sophisticated parametric pipeline that performs a sequence of clustering and pruning steps.
contrasting
train_7310
RuWordNet is more domain-specific in terms of vocabulary, so our input set of generic synonymy dictionaries has a limited coverage on this data set.
recall calculated on YARN is substantially higher as this resource was manually built on the basis of synonymy dictionaries used in our experiments.
contrasting
train_7311
An example of the sense graph built by WATSET for two senses of the lexical unit "java" using CW_log for local clustering.
to Figure 16, in this disambiguated distributional thesaurus the node corresponding to the lexical unit "java" is split: java 11 is connected to programming languages and java 17 is connected to drinks.
contrasting
train_7312
2010) as the performance indicator.
because the average size of a cluster in this experiment is much higher (Table 18 and Figure 14), we found that the enumeration of 2-combinations of semantic class elements is not computationally tractable in reasonable time on relatively large data sets like the ones we use in this experiment.
contrasting
train_7313
Among the exponents, that of the long-range correlation, ξ, differs largely among the four data sets considered thus far.
the other exponents generally take similar values for the data sets.
contrasting
train_7314
It is therefore not a challenge for language models to satisfy Zipf's and Heaps' laws.
the metrics of long memory were capable of quantifying the quality of machine-generated texts.
contrasting
train_7315
The Taylor exponents of the n-gram language models were consistently ζ = 0.50, which indicates the absence of long memory.
the neural language models had Taylor exponents of ζ > 0.50, which indicates the presence of long memory in the generated texts (right).
contrasting
train_7316
The Taylor exponent was ζ = 0.50, and the other metrics also indicated that the generated text did not have long-range dependence.
the RNNs with a gating mechanism (LSTM, GRU, and QRNNs) could reproduce long memory behavior.
contrasting
train_7317
In terms of BLEU, the best-performing GAN model varied among RankGAN with BLEU-2, LeakGAN with BLEU-3 and BLEU-4, and TextGAN with BLEU-5.
TextGAN was the best model in terms of eval-AWD-LSTM.
contrasting
train_7318
2017; Toral and Sanchez-Cartagena 2017).
no previous work has evaluated the performance of automatic evaluation metrics on the output of neural versus other MT approaches.
contrasting
train_7319
The plots for the different metrics look fairly similar, with the exception of BLEU.
to the rest of the metrics, BLEU assigns very low scores to the majority of translations due to the sparseness of n-gram matches with the references.
contrasting
train_7320
In such a setting, quality levels can be defined in a straightforward way based on the discrete manual evaluation scores.
the techniques discussed in the previous section cannot be applied, as a certain amount of variation inside each level would be required in order to measure the correlation between metric and human scores.
contrasting
train_7321
The annotations in this data set were collected using Amazon Mechanical Turk, which often raises questions about the reliability of the data.
as described in Graham, Mathur, and Baldwin (2015), a rigorous quality control was conducted in order to filter out unreliable annotations.
contrasting
train_7322
As mentioned in Section 2, some work has been done comparing MT evaluation metrics for statistical systems versus rule-based systems.
the behavior of metrics when it comes to neural MT (NMT), the new paradigm in MT research and development, has not yet been inspected.
contrasting
train_7323
In order to isolate the impact of the type of human judgments on the meta-evaluation results, in the following analysis we use the data sets that belong to the news domain (with the exception of the WMT17 Quality Estimation data set), have English as the target language, and have the outputs of MT systems based on a statistical approach (with the exception of the WMT16 data sets that contain different types of MT systems).
other factors, such as the average level of MT quality or the reliability of manual evaluation scores, are more difficult to control.
contrasting
train_7324
Automatic MT evaluation remains a prominent field of research, with various evaluation methods proposed every year.
studies describing the weaknesses and strengths of the existing approaches are very rare.
contrasting
train_7325
Typological documentation is limited by the fact that the evidence available for each language is highly unbalanced and many languages are not even recorded in a written form.
large typological databases such as WALS (Dryer and Haspelmath 2013) nevertheless have an impressive coverage (syntactic features for up to 1,519 languages).
contrasting
train_7326
Still, the language vectors belonging to the same cluster display some microvariations in individual features.
Figure 5(a) shows clusters differing from language genealogy: for instance, English and Czech are merged, although they belong to different genera (Germanic and Slavic).
contrasting
train_7327
With the exception of Naseem, Barzilay, and Globerson (2012), who treated typological information as a latent variable, automatically acquired typological features have not been integrated into algorithms for NLP applications to date.
they have several advantages over manually crafted features.
contrasting
train_7328
As shown in § 4.2, most of the typology-savvy algorithms thus far exploited features extracted from manually crafted databases.
this approach is riddled with several shortcomings, which are reflected in the small performance improvements observed in § 5.4.
contrasting
train_7329
As a consequence, these approaches typically select a limited subset of features from a single data set (WALS) and focus on a single aspect of variation (typically word order).
typological databases also cover other important features, related to predicate-argument structure (ValPaL), phonology (LAPSyD, PHOIBLE, StressTyp2), and lexical semantics (IDS, ASJP), which are currently largely neglected by the multilingual NLP community.
contrasting
train_7330
As a consequence, it does not fit well with the gradient and contextual models of machine learning.
typological databases are originally created from raw linguistic data.
contrasting
train_7331
In this experiment, we know beforehand that there is a relationship between words, and our aim is to identify the type of relationship.
in many situations this kind of a priori information is not available.
contrasting
train_7332
These rules are used to augment the set of target terms iteratively from a seed collection.
these approaches rely heavily on hand-coded rules and are inflexible.
contrasting
train_7333
These works indicate the importance of invariant syntactic information between words as a bridge between different domains.
their drawback is reflected by restraining the pivot information within a few manually defined rules that are fixed and inflexible.
contrasting
train_7334
The update process involves a gradient reversal layer for the domain predictor that results in the following updates: The DAN as defined here assumes similar data distributions between the source and target domains in general.
in many cases, the input data do not follow a universal distribution but rather a multimodal one.
contrasting
train_7335
For example, which weather parameters (temperature, wind chill) are influential versus what body parameters (heart rate, body temperature) are anomalous is heavily dependent on the domain at hand, such as weather or healthcare, respectively.
the surface realization part of language generation may not be as domain-dependent and can, thus, be designed in a reusable and scalable way.
contrasting
train_7336
The system is primarily designed for taking a structured table with variable schema as input and producing a coherent paragraph description pertaining to the facts in the table.
it can also work with other structured data formats such as graphs and key-value pairs (in the form of JSONs) as input.
contrasting
train_7337
We observe that the end-to-end WEBNLGMODEL does better than WIKIBIOMODEL.
our proposed system clearly gives the best performance, demonstrating the capability of generalizing in unseen domains and structured data in a more complex form such as a multi-row and multi-column table.
contrasting
train_7338
As we can see, all systems are significantly involved in producing the correct output in the test data.
the TRIPLE2TEXT system is selected fewer times than the other two systems.
contrasting
train_7339
2017), and set-to-sequence generation (Vinyals, Bengio, and Kudlur 2016) can also act as building blocks for generation from structured data.
none of these works consider the morphologic and linguistic variations of output words as we consider for simple language generation.
contrasting
train_7340
In some cases it is sufficient to know merely the range of argumentative types used in order to grade student essays (Ong, Litman, and Brusilovsky 2014), to know what stance an essay takes toward a proposition in order to check that it provides appropriate evidence to back up its stance (Persing and Ng 2015), or whether a claim is verifiable in order to flag these in online discussions (Park and Cardie 2014).
if the goal is to reconstruct enthymemes (Razuvayevskaya and Teufel 2017) (see also the discussion of Feng and Hirst [2011] in Section 8.2) or ask critical questions about support relations, we also need to extract the nature of the argumentation schemes being used.
contrasting
train_7341
Although formally structured dialogues can be captured and exploited in this way, many real world dialogues follow only very limited rules and the challenge of identifying the argumentative structure in free form discussion is complex.
even very informal dialogues nevertheless provide additional data beyond that available in monologue, which can be used to help constrain the task.
contrasting
train_7342
The model integrates the two-level formalism and a unification-based formalism.
to other works, we propose to separate the treatment of sequential and non-sequential morphotactic constraints.
contrasting
train_7343
General dictionaries testify all possible senses of a given word; typical word collocates acquired from dictionaries tend to cover the entire range of possible senses of a headword.
unrestricted texts reflect actual usage and possibly bear witness to senses which are relevant to a specific domain only.
contrasting
train_7344
We were able to extract all SIs relative to the entire KB.
we report here an intrinsic evaluation of the accuracy of acquired centroids which involves only a small subset of our results, since provision of a reference class typology is extremely labour intensive.
contrasting
train_7345
In multilingual application development within Microsoft research, grammar sharing has been extensively exploited ([Pin96], [GLPR97]).
all these approaches are rather opportunistic in the sense that existing grammatical descriptions based on existing grammar models were explored.
contrasting
train_7346
Fergus could easily be augmented with a preprocessor that maps a semantic representation to our syntactic input; this is not the focus of our research.
there are two more important differences.
contrasting
train_7347
1994;Tugwell 1995), but these approaches have remained context-free in their generative power.
Lexical-Functional Grammar (Kaplan & Bresnan 1982) is known to be beyond context-free.
contrasting
train_7348
We will assess their model on the LFG-annotated Verbmobil and Homecentre corpora in section 3 of this paper.
we will also assess an alternative definition of fragment probability which is a refinement of Bod & Kaplan's model.
contrasting
train_7349
In this paper we will use Bod's subtree-to-rule conversion method for studying the behavior of probabilistic against non-probabilistic DOP for different maximum subtree sizes.
we will not use Bod's Monte Carlo sampling technique from complete derivation forests, as this turns out to be computationally impractical for our larger corpora.
contrasting
train_7350
an SCFG) which annotates the nonterminals with headwords.
such a headlexicalized stochastic grammar does not capture dependencies between nonheadwords (such as more and than in the WSJ construction carry more people than cargo where neither more nor than are headwords of the NP-constituent more people than cargo) , whereas a frontier-lexicalized DOP model using large subtrees does capture these dependencies since it includes subtrees in which e.g.
contrasting
train_7351
In essence, we have a robust text analysis system for identification of proper names and technical terms, since these are most likely to carry the bulk of the semantic load in a document.
in addition to simple identification of certain phrasal types, capabilities also exist for identifying their variants (contractions, abbreviations, colloquial uses, etc.)
contrasting
train_7352
We focus on some strategies for incorporating segmentation results in the summary generation process.
unlike (Kan et al., 1998) (whose work also seeks to leverage linear segmentation for the explicit purposes of document summarization), we further take the view that with an appropriate interface metaphor-where the user has an overview of the relationships between a summary sentence, the key salient phrases within it, and its enclosing discourse segment-a sequence of visually demarcated segments can impart a lot of information directly leading to in-depth perception of the summary, as it relates to the full document (Boguraev and Neff, 2000).
contrasting
train_7353
In a sense, the apparatus of restrictions allows one to represent UWs as disambiguated English words.
restrictions allow one to denote concepts which are absent in English.
contrasting
train_7354
come(met>ship) is interpreted as 'come and the method of coming is a ship'.
here is an example of a UNL expression for the sentence (2) language differences are a barrier to the smooth flow of information in our society.
contrasting
train_7355
In the lexicon we used, nouns are, on average, members of 2 semantic classes.
the semantic classes are ordered so that the most typical use comes first.
contrasting
train_7356
See Annex for examples and the definition of binding constraints.
to this, however, the formal and computational handling of binding constraints has presented non-negligible resistance when it comes to their integration into the representation and processing of grammatical knowledge.
contrasting
train_7357
anaphoric definite descriptions, ruled by Principle C) may be assigned the respective equation.
even if these difficulties turn out to be solved, the LFG variant of the coindexation methodology presents the same type of problems as Johnson's approach.
contrasting
train_7358
This is detectable by instrumentation, as discussed in Sec. 4.1.
once there is a testsuite, it has to be used economically, avoiding redundant tests.
contrasting
train_7359
We do not need criteria for tree decomposition and category specialization, and we can use the standard parsing algorithm.
the efficiency gains are not as big as those reported by Rayner and Carter (1996), but note that we cannot measure parsing times alone, so we need to compare to their speed-up factor of 10.
contrasting
train_7360
95), an intelligent dictionary lookup which achieves some word sense disambiguation using word context (part-of-speech and multiword expressions (MWEs) recognition).
LOCOLEX choices remain purely syntactic.
contrasting
train_7361
Our methods are close to those used for positioning unknown words in thesauri.
the two issues can be differentiated with respect to the manipulated data.
contrasting
train_7362
Our experiments show that exogenous categorization is noticeably the more efficient of the two approaches.
it requires many more knowledge sources and computational overhead.
contrasting
train_7363
For example, the name of a country is usually not mentioned in a news article reporting an event that happened in that country.
the country name is important in foreign news.
contrasting
train_7364
The voting strategy from reporters gives a shorter summarization in terms of user-preferred languages.
it also misses some unique information reported only by one site.
contrasting
train_7365
The syntactic and semantic categories of unknown words in principle can be determined by their content and contextual information.
many difficult problems have to be solved.
contrasting
train_7366
Step 4 thus achieves the resolution of both semantic ambiguities of the head and the modifier.
only the category of the head is our target of resolution.
contrasting
train_7367
Neighborhoods are quite successful in guessing the second half of such lists.
there were a few big losers, e.g., articles that summarize the major stories of the day, week and year.
contrasting
train_7368
For example, for the text in Figure 1, whose rhetorical structure is shown in Figure 2, the head of span [5,7] is unit 5 because the head of the immediate nucleus, the elementary unit 5, is 5.
the head of span [6,7] is the list ⟨6,7⟩ because both immediate children are nuclei of a multinuclear relation.
contrasting
train_7369
The antecedent Genetic Therapy, Inc. appears in unit 1; therefore, using VT we search back 2 units (units 8 and 1) to find a correct antecedent.
to resolve the same reference using a linear model, four units (units 8, 7, 6, and 5) must be examined before Genetic Therapy is found.
contrasting
train_7370
In this case, we consider that the effort is equal to k. As a consequence, for small ks the effort required to establish co-referential links is similar for both theories, because both can establish only a limited number of links.
as k increases, the effort computed over the entire corpus diverges dramatically: using the Discourse-VT model, the search space for co-referential links is reduced by about 800 units for a corpus containing roughly 1200 referential expressions.
contrasting
train_7371
If one treats all discourse units in the preceding discourse equally, the increase is statistically significant only when a discourse-based coreference system looks back at most four discourse units in order to establish co-referential links.
if one assumes that proximity plays an important role in establishing coreferential links and that referential expressions are more likely to be linked to referees that were used recently in discourse, the increase is statistically significant no matter how many units a discourse-based co-reference system looks back in order to establish co-referential links.
contrasting
train_7372
A similar ambiguity is exhibited by (25): (i) there was no such (single) person that would give P. a book and M. flowers, (ii) P. did not get a book and M. did not get flowers.
there is no such ambiguity in (26) 'To Peter, nobody gave a book and to Mary, flowers.'
contrasting
train_7373
Whereas (Gundel et al., 1993) do not attempt to make their focus notion operationalizable, this has been attempted by further developments of centering.
these have mostly been applied to the pronoun resolution problem.
contrasting
train_7374
The left-corner grammar transform converts a left-recursive grammar into a non-left-recursive one: a top-down parser using a left-corner transformed grammar simulates a left-corner parser using the original grammar (Rosenkrantz and Lewis II, 1970; Aho and Ullman, 1972).
the left-corner transformed grammar can be significantly larger than the original grammar, causing numerous problems.
contrasting
train_7375
First, note that LC_P(G), the result of applying the standard left-corner grammar transform to G, has approximately 20 times the number of productions that G has.
LC_{td;lc}^{L0}(G), the result of applying the selective left-corner grammar transformation with factorization, has approximately 1.4 times the number of productions that G has.
contrasting
train_7376
For example, in this paper we assumed that one would always choose a left-corner production set that includes the minimal set L0 required to ensure that the transformed grammar is not left-recursive.
Roark and Johnson (1999) report good performance from a stochastically-guided top-down parser, suggesting that left-recursion is not always fatal.
contrasting
train_7377
as more recursive semantic phenomena such as possessives and other complex noun phrases are added to the grammar, the resulting machines become larger.
the computational consequences of this can be lessened by lazy evaluation techniques (Mohri, 1997) and we believe that this finitestate approach to constructing semantic representations is viable for a broad range of sophisticated language interface tasks.
contrasting
train_7378
Some mixed-script entries could be handled as syntactic compounds, for example, ID [ai dii kaado="ID card"] could be derived from ID NOUN + NOUN .
many such items are preferably treated as lexical entries because In addition, many Japanese verbs and adjectives (and words derived from them) have a variety of accepted spellings associated with okurigana, optional characters representing inflectional endings.
contrasting
train_7379
And ∆_{n,h}, the distance between the two words, is widely used, because this attribute is believed to strongly affect whether those two words are going to be related.
in the statistical model proposed in this paper, P(n → h) depends not only on the attributes of the tree M, but also on alternative trees in the parse forest generated by the grammar.
contrasting
train_7380
They report that this kind of contextual information improves accuracy.
the model has to assume the independence of all the random variables, which may cause some errors.
contrasting
train_7381
Some research institutes have constructed Japanese case frame dictionaries manually (Ikehara et al., 1997;Information-Technology Promotion Agency, Japan, 1987).
it is quite expensive, or almost impossible to construct a wide-coverage case frame dictionary by hand.
contrasting
train_7382
A noun modified by a clause is usually a case component for the verb of the modifying clause.
there is no case-marker for their relation.
contrasting
train_7383
For example, the case frame of the verb naru 'become' differs depending on its ni (dative) case (e.g., 'become a friend'). In most cases, the main case components are placed just in front of the light verbs so that the automatic parser can detect their relations. As shown in Table 1, KNP detects heads of case components with fairly high accuracy.
in order to collect reliable data, we discarded modifier-head relations in the automatically parsed corpora in the following cases: • When CMs of case components disappear because of topic markers or others.
contrasting
train_7384
Based on the conditions above, case components of each verb are collected from the parsed corpora, and the collected data are considered as case frames of verbs.
if the frequency of a CM is very low compared to other CMs, it might be collected because of parse errors.
contrasting
train_7385
We can thus expect to have identical SPL expressions for Bulgarian, Czech and Russian in many cases, although these may be realized by diverging syntactic structures.
we also allow for the case in which there is no commonality at this level and even the SPL expressions diverge.
contrasting
train_7386
Whether this is true remains an open problem.
the previous intuition seems to hold anyway when the two analogies to be concatenated do not have any symbol in common.
contrasting
train_7387
For instance, ay : az = by : x is not acceptable when x = zb.
the three other possible analogies meet intuition, so that the following hypothesis may be laid.
contrasting
train_7388
In the tagging phase, instead of using (4)-(6), the input can be constructed simply as ipt_{t-i} = g_{t-i} OPT(-i) (10), where i = 1, ..., l, and OPT(-i) means the output of the tagger for the ith word before the target word.
in the training process, the output of the tagger is not always correct and cannot be fed back to the inputs directly.
contrasting
train_7389
This process is repeated until N(w_t) ≠ Unknown or (l, r) = (0, 0).
to make the same set of connection weights of the neuro tagger with the largest (l, r) available as much as possible when using short inputs for tagging, in the training phase the neuro tagger is regarded as a neural network that has gradually grown from a small one.
contrasting
train_7390
In the last decade, members of the computational linguistics community have adopted a perspective on discourse based primarily on either Rhetorical Structure Theory or Grosz and Sidner's Theory.
only recently, researchers have started to investigate the relationship between the two perspectives.
contrasting
train_7391
Hence, axiom (8) explicates the relationship between the structure of discourse and intentional dominance.
axiom (9) explicates the relationship between intentional dominance and discourse structure.
contrasting
train_7392
One basic rule for MWTU representation is that an MWTU is composed of only lexical morphemes if possible, that is, grammatical morphemes such as particles and the endings of a word will be extracted in the representation because of the above characteristics which are freely inserted and omitted.
grammatical morphemes affecting the meanings of MWTUs must be described.
contrasting
train_7393
This factor was preserved under 5 and 500 repetitions of the same parse.
speed was not the main issue in developing this setup, but rather simplicity and ease of implementation.
contrasting
train_7394
In such cases, we must add new features to the analysis to create a situation in which many category-exclusive rules can be applied.
it is not sufficient to use category-exclusive rules.
contrasting
train_7395
When multiple rules have the same probability and similarity, the method takes the examples used by the rules having the highest probability and the highest similarity, and chooses the category with the larger number of examples as the desired answer, in the same way as in Method 1.
when category-exclusive rules having more than one frequency exist, the above procedure is performed after eliminating all of the categoryexclusive rules having one frequency.
contrasting
train_7396
They are similar to MID-3D in that they use planning mechanisms in content planning.
in presentation systems, unlike dialogue systems, the user just watches the presentation without changing her/his view.
contrasting
train_7397
As a result, two Main-Acts looking at the user and requesting to try to do the action and two Subsidiary-Acts showing how to do the action, then resetting the state are set as subgoals and returned to the DM.
if the object is not visible to the user, ⟨Operator 2⟩ is selected.
contrasting
train_7398
This task, which we will call text structuring, is typically addressed through a micro-planning phase that determines the content of successive sentences.
documents of realistic complexity require richer TSs including, for example, vertical lists, sub-sections, and clauses separated by semi-colons.
contrasting
train_7399
The converse does not hold: for instance, an RS of the form R1(R2(p1, p2), p3) can be realized by a paragraph of three sentences, one for each proposition, even though this TS contains no node dominating the propositions (p1 and p2) that are grouped by R2.
when this happens, the propositions grouped together in the RS must remain consecutive in the TS: solutions in which p3 comes in between p1 and p2 are prohibited.
contrasting