Columns:
  id         string (lengths 7–12)
  sentence1  string (lengths 6–1.27k)
  sentence2  string (lengths 6–926)
  label      string (4 classes)
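The rows below follow a fixed four-line layout (id, sentence1, sentence2, label). A minimal sketch of parsing such a dump into records; the layout assumption and the `parse_rows` helper are illustrative, not part of any official loader:

```python
# Hypothetical parser for this dump's four-lines-per-row layout:
# each record is exactly [id, sentence1, sentence2, label].
def parse_rows(lines):
    rows = []
    for i in range(0, len(lines) - 3, 4):
        rows.append({
            "id": lines[i],
            "sentence1": lines[i + 1],
            "sentence2": lines[i + 2],
            "label": lines[i + 3],
        })
    return rows

sample = [
    "train_11800",
    "However, with respect to entrenchment ...",
    "the idea that frequently occurring behaviours ...",
    "contrasting",
]
print(parse_rows(sample)[0]["label"])  # contrasting
```

Note that sentence2 entries deliberately begin lowercase: the discourse connective that signalled the label was stripped when the pairs were extracted.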
train_11800
However, with respect to entrenchment (Pierrehumbert, 2001; Bybee, 2006), i.e.
the idea that frequently occurring behaviours undergo processes of entrenchment, in Experiment 1 one might expect to see greater similarity in the realisations of L*H. It is important to note that while tokens of L*H are not particularly similar to each other (the bulk of the distribution is around zero; see Figure 1), they are not too dissimilar either.
contrasting
train_11801
ABIR, compared to the image-only approach of CBIR, offers a practical advantage in that queries can be more naturally specified by a human user (Inoue, 2004).
manually annotating biomedical images is a laborious and subjective task that often leads to noisy results.
contrasting
train_11802
In the NUMBERS system, information is only allowed to flow from left to right, which means that the LB may be regarded as the input buffer and the RB as the output buffer.
in the general model, information may flow in both directions.
contrasting
train_11803
Like them, we rely on the presence or absence of cohesive links between the words in a text.
unlike Hirst and St-Onge we do not require a hand-crafted resource like WordNet or Roget's Thesaurus; our approach is knowledgelean.
contrasting
train_11804
At first glance, this seems to be quite natural.
when we compared this alternative method with the aforementioned approximation on search steps, we found that it worked worse than the latter, in terms of performance and speed.
contrasting
train_11805
NETE discovery from comparable corpora using time series and transliteration model was proposed in (Klementiev and Roth, 2006), and extended for NETE mining for several languages in (Saravanan and Kumaran, 2007).
such methods miss the vast majority of the NETEs due to their dependency on frequency signatures.
contrasting
train_11806
For example, a template shown in Section 5, A MAY WEAR A CRASH HELMET, was supported by just two sentences in the BNC.
based on those two observations we were able to conclude that, usually, if something wears a crash helmet, it is probably a male person.
contrasting
train_11807
In initial experiments, we used relative frequencies.
for instance, the trigram filter would allow any tuple ⟨g, r_{i−2}, r_{i−1}, r_i⟩ for some constant threshold τ, provided: we found that filters are more effective (and require much less space; see below), which simply require that every step has been observed often enough in the training data: In particular, the case where τ = 0 gave surprisingly good results.
contrasting
train_11808
On the one hand, the Alpino Treebank might not be a reliable test set for the Alpino parser, because it has been used quite intensively during the development of various components of the system.
we might regard the experiments in the previous section as development experiments from which we learn the best parameters of the approach.
contrasting
train_11809
They report important efficiency gains (the parser is about three times faster), coupled with a mild reduction of coverage (5% loss).
to our approach in which no manual annotation is required, Rayner and Carter (1996) report that for each sentence in the training data, the best parse was selected manually from the set of parses generated by the parser.
contrasting
train_11810
However, only 64 of the 100 test items in the McRae data set contain verbs that are attested in the FrameNet corpus, 8 of which involve an unattested verb sense.
the only requirement for the exemplar-based model to be able to make its predictions is that the similarities between the verbs and the nouns in the target exemplars and the memory exemplars can be computed.
contrasting
train_11811
Their approach hinges on the fact that similes exploit stereotypes to draw out the salient properties of a target, thereby allowing rich descriptions of those stereotypes to be easily acquired, e.g., that snowflakes are pure and unique, acrobats are agile and nimble, knives are sharp and dangerous, viruses are malicious and infectious, and so on.
because they find that almost 15% of their web-harvested similes are ironic (e.g., "as subtle as a rock", "as bulletproof as a sponge-cake", etc.
contrasting
train_11812
), they filter irony from these associations by hand, to yield a sizable database of stereotypical attributions that describes over 6000 noun concepts in terms of over 2000 adjectival properties.
because Veale and Hao's data directly maps stereotypical properties to simile vehicles, it does not provide a parent category for these vehicles.
contrasting
train_11813
Both the WordNet and ConceptNet seeds achieve comparable accuracies of 68% and 67% respectively after 5 cycles of bootstrapping, which compares well with the accuracy of 62.7% achieved by Poesio and Almuhareb.
the simile seed clearly yields the best accuracy of 84.3%, which also exceeds the accuracy of 66.4% achieved by Poesio and Almuhareb when using both values and attributes (such as Temperature, Color, etc.)
contrasting
train_11814
Their method is simple but effective.
the features used in this method are only suitable for parallel corpora as the measurement is mainly based on structural similarity.
contrasting
train_11815
Supervised methods such as Support Vector Machine (SVM) and Maximum Entropy (ME) estimate the weight of each feature based on training data which are then used to calculate the final score.
these supervised learning-based methods may not be applicable to our proposed issue as we are motivated to build a language independent unsupervised system.
contrasting
train_11816
An edge is penalised if it is improbable that the head takes on yet another modifier, say in the example of an attachment to a preposition whose argument position has already been filled.
accounting for argument positions makes an edge weight dynamic and dependent on surrounding tree context.
contrasting
train_11817
Should an approach chance upon an alternative grammatical ordering, it would be penalised.
all algorithms and baselines compared would suffer equally in this respect, and so this will be less problematic when averaging across multiple test cases.
contrasting
train_11818
The margin narrows between the CLE algorithm and the LMO baseline.
the AB algorithm still outperforms all other approaches by 7 BLEU points, highlighting the benefit in modelling dependency relations.
contrasting
train_11819
The results for the two systems are very similar since they use same kinds of features.
with Markov logic, it is easy to add predicates and formulas to allow joint inference.
contrasting
train_11820
Increasing the expressiveness of the argument key representation by flagging intransitive constructions would distinguish that pair of arguments.
we keep this particular representation, in part to compare with the previous work.
contrasting
train_11821
While RTE is outside our present scope, we do focus on QP entailment as Natural Logic does.
our evaluation differs from Chambers et al.
contrasting
train_11822
We feed each potential entailment pair to SVM by concatenating the two vectors representing the antecedent and consequent expressions.
for efficiency and to mitigate data sparseness, we reduce the dimensionality of the semantic vectors to 300 columns using Singular Value Decomposition (SVD) before feeding them to the classifier.
contrasting
train_11823
This success is especially impressive given our challenging training and testing regimes.
to the first study, now SVM_{AN⊨N}, the classifier trained on the AN ⊨ N data set, and balAPinc perform no better than the baselines.
contrasting
train_11824
A major focus of current work in distributional models of semantics is to construct phrase representations compositionally from word representations.
the syntactic contexts which are modelled are usually severely limited, a fact which is reflected in the lexical-level WSD-like evaluation methods used.
contrasting
train_11825
One uses the WordSim-353 dataset (Finkelstein et al., 2002), which contains human word pair similarity judgments that semantic models should reproduce.
the word pairs are given without context, and homography is unaddressed.
contrasting
train_11826
Parsing accuracy has been used as a preliminary evaluation of semantic models that produce syntactic structure (Socher et al., 2010;Wu and Schuler, 2011).
syntax does not always reflect semantic content, and we are specifically interested in supporting syntactic invariance when doing semantic inference.
contrasting
train_11827
there are apparently verbless sentences in Hungarian: A ház nagy (the house big) "The house is big".
in other tenses or moods, the copula is present as in A ház nagy lesz (the house big will.be) "The house will be big".
contrasting
train_11828
In Hungarian, they are ambiguous between being adverbs and conjunctions and it is mostly their conjunctive uses which are problematic from the viewpoint of parsing.
these words have an important role in marking the information structure of the sentence: they are usually attached to the element in focus position, and if there is no focus, they are attached to the verb.
contrasting
train_11829
On the other hand, these words have an important role in marking the information structure of the sentence: they are usually attached to the element in focus position, and if there is no focus, they are attached to the verb.
sentences with or without focus can have similar word order but their stress pattern is different.
contrasting
train_11830
', where it is the subject that follows the verb for stylistic reasons.
in Hungarian, morphological information is of help in such sentences, as it is not the position relative to the verb but the case suffix that determines the grammatical role of the noun.
contrasting
train_11831
locative adverbs were labeled as ADV/MODE.
the frequency rate of this error type is much higher in English than in Hungarian, which may be related to the fact that in the English corpus, there is a much more balanced distribution of adverbial labels than in the Hungarian one (where the categories MODE and TLOCY are responsible for 90% of the occurrences).
contrasting
train_11832
Machine learning drives the process of deciding among alternative candidate splits, i.e., feature information can draw on full structural information for the entire material in the span under consideration.
due to the dynamic programming approach, the features cannot use arbitrarily complex structural configurations: otherwise the dynamic programming chart would have to be split into exponentially many special states.
contrasting
train_11833
As such the approach is suitable to detect paraphrases that describe the relation between two entities in documents.
the paper does not describe how the mined paraphrases can be linked to questions, and which paraphrase is suitable to answer which question type.
contrasting
train_11834
(Shen and Klakow, 2006) also describe a method that is primarily based on similarity scores between dependency relation pairs.
their algorithm computes the similarity of paths between key phrases, not between words.
contrasting
train_11835
For paragraph retrieval we use the same approach as for evaluation set 1, see Section 7.1.
in more than 20% of the cases, this method returns not a single paragraph that contains both the answer and at least one question keyword.
contrasting
train_11836
Normally, these pieces of information (i.e., nuggets) explain different facets of the definiendum (e.g., "ballet choreographer" and "born in Bordeaux"), and the main idea consists in projecting the acquired nuggets into the set of answer candidates afterwards.
the performance of this category of method falls into sharp decline whenever little or no coverage is found across KBs (Zhang et al., 2005; Han et al., 2006).
contrasting
train_11837
On the one hand, we want our model to produce a "good" translation (well-formed and transmitting the information contained in the source query) of an input query.
we want to obtain good retrieval performance using the proposed translation.
contrasting
train_11838
One of the reasons being the poor diversity of the Nbest list of the translations.
we believe that this approach has more potential in the context of query translation.
contrasting
train_11839
Ideally, we would have liked to combine the two approaches we proposed: use the querygenre-tuned model to produce the Nbest list which is then reranked to optimize the MAP score.
it was not possible in our experimental settings due to the small amount of training data available.
contrasting
train_11840
Combining monolingually estimated reordering and phrase table features (M/M) yields a total gain of 13.5 BLEU points, or over 75% of the BLEU score loss that occurred when we dropped all features from the phrase table.
these results use "monolingual" corpora which have practically identical phrasal and temporal distributions.
contrasting
train_11841
This is even enforced due to the availability of toolboxes such as Moses (Koehn et al., 2007) which make it possible to build translation engines within days or even hours for any language pair provided that appropriate training data is available.
this reliance on training data is also the most severe limitation of statistical approaches.
contrasting
train_11842
A general condition for the pivot approach is to assume independent training sets for both translation models as already pointed out by (Bertoldi et al., 2008).
to research presented in related work (see, for example, (Koehn et al., 2009)) this condition is met in our setup in which all data sets represent different samples over the languages considered (see section 4).
contrasting
train_11843
The reason for this might be that the IBM models can handle noise in the training data more robustly.
in terms of unknown words, WFST-based alignment is very competitive and often the best choice (but not much different from the best IBM based models).
contrasting
train_11844
Rescoring of N-best lists, on the other hand, does not have a big impact on our results.
we did not spend time optimizing the parameters of N-best size and interpolation weight.
contrasting
train_11845
For the translation into English, the in-domain language model helps a little bit (similar resources are not available for the other direction).
having the strong in-domain model for translating to (and from) the pivot language improves the scores dramatically.
contrasting
train_11846
Both techniques span through two orthogonal criteria when selecting bilingual sentences from the available pool: avoiding to introduce a bias in the original data distribution, and increasing the informativeness of the corpus.
we prove that among all possible subsets from the sentence pool, there is at least a small one that yields large improvements (up to 10 BLEU points) with respect to a system trained with all the data.
contrasting
train_11847
The same partitions as in the IWSLT2010 evaluation task (Paul et al., 2010) were used. An effective technique that is commonly used is to reproduce out-of-vocabulary words from the source sentence in the target hypothesis.
invariable n-grams are usually infrequent as well, which implies that the infrequent n-grams technique would select sentences containing such n-grams, even though they do not provide further information.
contrasting
train_11848
This technique could be also applied to promote the performance of the system built by means of BSS.
this is left out as future work.
contrasting
train_11849
As they do not specify which data they used for their held-out test set, we cannot perform a direct comparison.
our feature set is nearly a superset of their best feature set, and their result lies well within the range of results seen in our cross-validation folds.
contrasting
train_11850
The motivations behind that are: • Since the named entities have a tree structure, it is reasonable to use a solution coming from syntactic parsing.
preliminary experiments using such approaches gave poor results.
contrasting
train_11851
Since we are dealing with noisy data, the hardest part of the task is indeed to annotate components on words.
since entity trees are relatively simple, at least much simpler than syntactic trees, once entity components have been annotated in a first step, for the second step, a complex model is not required, which would also make the processing slower.
contrasting
train_11852
Intuitively, this representation is effective since entities annotated directly on words provide also the entity of the parent node.
this representation increases drastically the number of entities, in particular the number of components, which in our case are the set of labels to be learned by the CRF model.
contrasting
train_11853
Our task is similar to task A and C of TempEval-1 (Verhagen et al., 2007) in the sense that we attempt to identify temporal relation between events and time expressions or document dates.
we do not use a restricted set of events, but focus primarily on a single temporal relation tlink instead of named relations like BEFORE, AFTER or OVERLAP (although we show that we can incorporate these as well).
contrasting
train_11854
We plot the Cumulative distribution of frequency (CDF) of the ranks (as percentages in the mixed pools) of false negatives in figure 3.
we took similar steps for the spurious ones (false positives) and plot them in figure 3 as well (they are ranked by model-predicted probabilities of being negative).
contrasting
train_11855
That means the missing examples are lexically, structurally or semantically similar to correct examples, and are distinguishable from the true negative examples.
the distribution of false positives (spurious examples) is close to uniform (flat curve), which means they are generally indistinguishable from the correct examples.
contrasting
train_11856
It relies on the assumption that the annotators annotated all relation mentions and missed no (or very few) examples.
this is not true for training on a single-pass annotation, in which a significant portion of relation mentions are left unannotated.
contrasting
train_11857
Because of this, we will learn a model that can predict approximately good summaries y i from x i .
we believe that most of the score difference between manual summaries and y i (as explored in the experiments section) is due to it being an extractive summary and not due to greedy construction.
contrasting
train_11858
On one hand, this means that one can train MT systems on S → T data only, at the expense of only a minor loss in quality.
it is obvious that the T → S component also contributes to translation quality.
contrasting
train_11859
Source Cependant, je pense qu'il est prématuré de le faire actuellement, étant donné que le ministre a lancé cette tournée.
baseline I think it is premature to the right now, since the minister launched this tour.
contrasting
train_11860
Berend (2011) performs a form of pro/con summarization that does not rely on aspects.
most of the problems of aspect-based pro/con summarization also apply to this paper: no differentiation between good and bad reasons, the need for human labels to train a classifier, and inferior readability compared to a well-formed sentence.
contrasting
train_11861
The most common use of crowdsourcing in NLP is to have workers label a training set and then train a supervised classifier on this training set.
we use crowdsourcing to directly evaluate the relative quality of the automatic summaries generated by the unsupervised method we propose.
contrasting
train_11862
We are not assuming that all important words in the supporting sentence are nominal; the verb will be needed in many cases to accurately convey the reason for the sentiment expressed.
it is a fairly safe assumption that part of the information is conveyed using noun phrases since it is difficult to convey specific information without using specific noun phrases.
contrasting
train_11863
It is comparable in size to only two known lexicons: WORDNET-AFFECT (Strapparava and Valitutti, 2004) and EMOLEX (Mohammad and Turney, 2010).
to the development of these lexicons, we do not restrict our annotators to a particular set of emotions.
contrasting
train_11864
In particular, B&L associated a lower rank with automatically created permutations of a source document, and learned a model to discriminate an original text from its permutations (see Section 3.1 below).
coherence is a matter of degree rather than a binary distinction, so a model based only on such pairwise rankings is insufficiently fine-grained and cannot capture the subtle differences in coherence between the permuted documents.
contrasting
train_11865
Their extended model significantly improved ranking accuracies on the same two datasets used by Barzilay and Lapata (2008) as well as on the Wall Street Journal corpus.
while enriching or modifying the original features used in the standard model is certainly a direction for refinement of the model, it usually requires more training data or a more sophisticated feature representation.
contrasting
train_11866
We implement our multiple-rank model with full coreference resolution using Ng and Cardie's coreference resolution system, and entity extraction approach as described above -the Coref-erence+ condition.
as argued by Elsner and Charniak (2011), to better simulate the real situations that human readers might encounter in machine-generated documents, such oracular information should not be taken into account.
contrasting
train_11867
Rather than rely on an ad-hoc summation of PMIs, they apply language modeling techniques (specifically, a smoothed 5-gram model) over the sequence of events in the collected chains.
they only tested these language models on sequencing tasks (e.g.
contrasting
train_11868
Third, we have discussed why Recall@N is a better and more consistent evaluation metric than Average rank.
both evaluation metrics suffer from the strictness of the narrative cloze test, which accepts only one event being the correct event, while it is sometimes very difficult, even for humans, to predict the missing events, and sometimes more solutions are possible and equally correct.
contrasting
train_11869
We think such an approach is useful because it forms a natu-ral baseline for the task (as it does in many other tasks such as named entity tagging and language modeling).
story structure is seldom strictly linear, and future work should consider models based on grammatical or discourse links that can capture the more complex nature of script events and story structure.
contrasting
train_11870
An AUC curve that is at least as good as a second curve at all points, is said to dominate it and indicates that the first classifier is equal or better than the second for all plotted values of the parameters, and all cost ratios.
AUC being greater for one classifier than another does not have such a property; indeed, deconvexities within or intersections of ROC curves are both prima facie evidence that fusion of the parameterized classifiers will be useful (cf.
contrasting
train_11871
Uebersax (1987), Hutchison (1993) and Bonnet and Price (2005) each compare Kappa and Correlation and conclude that there does not seem to be any situation where Kappa would be preferable to Correlation.
all the Kappa and Correlation variants considered were symmetric, and it is thus interesting to consider the separate regression coefficients underlying it that represent the Powers Kappa duals of Informedness and Markedness, which have the advantage of separating out the influences of Prevalence and Bias (which then allows macroaveraging, which is not admissible for any symmetric form of Correlation or Kappa, as we will discuss shortly).
contrasting
train_11872
The other Kappa and Correlations are more complex (note the denominators in Eqns 5-9) and how they might be meaningfully macro-averaged is an open question.
microaveraging can always be done quickly and easily by simply summing all the contingency tables (the true contingency tables are tables of counts, not probabilities, as shown in Table 1).
contrasting
train_11873
It is very helpful to build an automatic system to suggest latest information a user would be interested in.
unlike formal news media, user generated content in forums is usually less organized and not well formed.
contrasting
train_11874
Casual forums like "Wow gaming" have many more posts in each thread.
its posts are the shortest in length.
contrasting
train_11875
In some previous work, clustering methods were used to partition users into several groups. Then, predictions were made using information from users in the same group.
in the case of thread recommendation, we found that users' interest does not form clean clusters.
contrasting
train_11876
This way the confidence of "how likely" an item is interesting is preserved.
the downside is that the two different systems have different calibrations of their posterior probabilities, which could be problematic when directly adding them together.
contrasting
train_11877
This observation is consistent with previous work (e.g., (Pavlov et al., 2004)).
we found that in "Fitness Forum", the performance degrades with normalization.
contrasting
train_11878
(red=0 being higher on the ranked list and green being lower) may be interested in a larger variety of topics and thus the user distribution in different topics is not very obvious.
people in the gaming forum are more specific to the topics they are interested in.
contrasting
train_11879
Arguably yes: Today the Acme Pencil Factory celebrated its one-billionth pencil.
such a contrived example is unnatural because unlike birthday, pencil lacks a strong association with celebrate.
contrasting
train_11880
In both sentences the agent of opened, namely Pat, must be capable of opening somethingan informative constraint on Pat.
knowing that the grammatical subject of opened is Pat in the first sentence and the door in the second sentence tells us only that they are nouns.
contrasting
train_11881
We quantify the selectional preference for a relative r to instantiate a relation R of a target t as the probability Pr(r | t, R), estimated as follows.
by the definition of conditional probability: Pr(r | t, R) = Pr(r, t, R) / Pr(t, R). We care only about the relative probability of different r for fixed t and R, so we rewrite it as: Pr(r | t, R) ∝ Pr(r, t, R). We use the chain rule: Pr(r, t, R) = Pr(R | r, t) · Pr(r | t) · Pr(t), and notice that t is held constant: Pr(r | t, R) ∝ Pr(R | r, t) · Pr(r | t). We estimate the second factor as follows: Pr(r | t) ≈ freq(t, r) / freq(t). We calculate the denominator freq(t) as the number of N-grams in the Google N-gram corpus that contain t, and the numerator freq(t, r) as the number of N-grams containing both t and r. To estimate the factor Pr(R | r, t) directly from a corpus of text labeled with grammatical relations, it would be trivial to count how often a word r bears relation R to target word t. The results would be limited to the words in the corpus, and many relation frequencies would be estimated sparsely or missing altogether; t or r might not even occur.
contrasting
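The co-occurrence estimate described in the row above, Pr(r | t) ≈ freq(t, r)/freq(t), can be sketched with toy counts. The tiny N-gram "corpus" below is invented for illustration and is unrelated to the actual Google N-gram corpus:

```python
# Toy sketch of Pr(r | t) ~ freq(t, r) / freq(t):
# freq(t) counts N-grams containing t; freq(t, r) counts N-grams containing both.
ngrams = [
    ("the", "door", "opened"),
    ("pat", "opened", "the"),
    ("opened", "the", "door"),
    ("the", "red", "door"),
]

def freq(*words):
    # Number of N-grams that contain every one of the given words.
    return sum(all(w in ng for w in words) for ng in ngrams)

def pr_r_given_t(r, t):
    return freq(t, r) / freq(t)

# 3 N-grams contain "door"; 2 of them also contain "opened".
print(pr_r_given_t("opened", "door"))
```

This only covers the second factor of the derivation; the relation factor Pr(R | r, t) is estimated separately via POS N-gram abstraction, as the surrounding rows describe.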
train_11882
To transform Pr(R | r, t) into a form we can estimate, we first apply the definition of conditional probability: Pr(R | r, t) = Pr(R, t, r) / Pr(t, r). To estimate the numerator Pr(R, t, r), we first marginalize over the POS N-gram p: Pr(R, t, r) = Σ_p Pr(R, t, r, p). We expand the numerator using the chain rule: Pr(R, t, r, p) = Pr(R | p, t, r) · Pr(p | t, r) · Pr(t, r). Cancelling the common factor yields: Pr(R | r, t) = Σ_p Pr(R | p, t, r) · Pr(p | t, r). We approximate the first term Pr(R | p, t, r) as Pr(R | p), based on the simplifying assumption that R is conditionally independent of t and r, given p. In other words, we assume that given a POS N-gram, the target and relative words t and r give no additional information about the probability of a relation.
their respective positions i and j in the POS N-gram p matter, so we condition the probability on them: Pr(R | p, i, j). As Figure 1 shows, we estimate Pr(R | p, i, j) by abstracting the labeled corpus into POS N-grams.
contrasting
train_11883
(2010) drew (R, t) pairs from each of five frequency bands in the entire British National Corpus (BNC): 50-100 occurrences; 101-200; 201-500; 500-1000; and more than 1000.
we use only half of BNC as our test corpus, so to obtain a comparable test set, we drew 20 (R, t) pairs from each of the corresponding frequency bands in that half: 26-50 occurrences; 51-100; 101-250; 251-500; and more than 500.
contrasting
train_11884
(2010) reported the state-ofthe-art method we used as our EPP baseline.
to prior work that explored various solutions to the generalization problem, we don't so much solve this problem as circumvent it.
contrasting
train_11885
Further, Wiktionary provides relations to other words, e.g., in the form of synonyms, antonyms, hypernyms, hyponyms, holonyms, and meronyms.
to GermaNet, the relations are (mostly) not disambiguated.
contrasting
train_11886
In order to eliminate such noise, manual post-editing is required.
such post-editing is within acceptable limits: it took an experienced research assistant a total of 25 hours to hand-correct all the occurrences of sense-annotated target words and to manually sense-tag any missing target words for the four text types.
contrasting
train_11887
By focusing on web-based data, their work resembles the research described in the present paper.
the underlying harvesting methods differ.
contrasting
train_11888
for a similar approach and results for German texts).
they failed to find an effect for lexicalized surprisal, over and above forward transitional probability.
contrasting
train_11889
The resulting χ 2 statistic indicates the extent to which each surprisal estimate accounts for RT, and can thus serve as a measure of the psychological accuracy of each model.
this kind of analysis assumes that RT for a word reflects processing of only that word, but spill-over effects (in which processing difficulty at word w t shows up in the RT on w t+1 ) have been found in self-paced and natural reading (Just et al., 1982;Rayner, 1998;Rayner and Pollatsek, 1987).
contrasting
train_11890
Both for the lexicalized and unlexicalized versions, these effects persisted whether surprisal for the previous or current word was taken as the independent variable.
the effect size was much larger for previous surprisal, indicating the presence of strong spill-over effects (e.g.
contrasting
train_11891
If particular syntactic categories were contributing to the overall effect of surprisal more than others, including such random slopes would lead to additional variance being explained.
this was not the case: inclusion of by-POS random slopes of surprisal did not lead to a significant improvement in model fit (PSG: χ 2 (1) = 0.86, p = 0.35; RNN: χ 2 (1) = 3.20, p = 0.07).
contrasting
train_11892
This is not surprising, given that while the unlexicalized models only have access to syntactic sources of information, the lexicalized models, like the human parser, can also take into account lexical cooccurrence trends.
when a training corpus is not large enough to accurately capture the latter, it might still be able to model the former, given the higher frequency of occurrence of each possible item (POS vs. word) in the training data.
contrasting
train_11893
8 In terms of peak accuracies, EM gives a slightly better result than the spectral method (80.51% for EM with 15 states versus 79.75% for the spectral method with 9 states).
the spectral algorithm is much faster to train.
contrasting
train_11894
The same dependency pattern might be constructed for multiple (positive or negative) entity pairs.
if it is constructed for both positive and negative pairs, it has to be discarded from the pattern list.
contrasting
train_11895
• Usually, the e-walk features are constructed using dependency types between {governor of X, node X} and {node X, dependent of X}.
we also extract e-walk features from the dependency types between any two dependents and their common governor (i.e.
contrasting
train_11896
One approach is to capture local symmetry of conjuncts.
this approach fails in VP and sentential coordinations, which can easily be detected by a grammatical approach.
contrasting
train_11897
It is therefore natural to think that considering both the syntax and local symmetry of conjuncts would lead to a more accurate analysis.
it is difficult to consider both of them in a dynamic programming algorithm, which has been often used for each of them, because it explodes the computational and implementational complexity.
contrasting
train_11898
Suppose we have an Italian word arcipelago, and we would like to detect its correct English translation (archipelago).
after the TI+Cue method is employed, and even after the symmetrizing re-ranking process from the previous step is used, we still acquire a wrong translation candidate pair (arcipelago, island).
contrasting
train_11899
1997) and Tiger (Brants et al., 2002) corpora, or those that can be extracted from traces such as in the Penn treebank (Marcus et al., 1993) annotation.
the computational complexity is such that until now, the length of sentences needed to be restricted.
contrasting