Schema:
  id         string (length 7–12)
  sentence1  string (length 6–1.27k)
  sentence2  string (length 6–926)
  label      string (4 classes)
train_98700
This test set is suitable for simulating the real process of news production because it is constructed by a Japanese media company.
these parts show that the proposed method achieved scores comparable to those of models trained on the whole training dataset.
neutral
train_98701
The sentence-level encoder GRU iteratively takes the embeddings of the words in a sentence to update its hidden state, thus its final hidden state is a representation of the sentence.
the decoder GRU takes the current dialogue representation to initialize its hidden state so as to generate a response sentence word by word.
neutral
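The sentence-level encoder and decoder GRUs described in train_98701 can be sketched in a few lines of PyTorch. This is a minimal illustration, not the cited authors' implementation; the class and attribute names (HierarchicalEncoderDecoder, word_emb, sent_enc) and all dimensions are invented for the example.

import torch
import torch.nn as nn

class HierarchicalEncoderDecoder(nn.Module):
    """Sketch: a sentence-level GRU encodes word embeddings into a final
    hidden state (the sentence representation); a decoder GRU is initialized
    with the dialogue representation and generates a response word by word."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.sent_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode_sentence(self, word_ids):           # word_ids: (batch, seq_len)
        emb = self.word_emb(word_ids)
        _, h_final = self.sent_enc(emb)            # final hidden state = sentence repr.
        return h_final                             # (1, batch, hid_dim)

    def decode(self, dialogue_repr, response_ids):
        emb = self.word_emb(response_ids)
        out, _ = self.decoder(emb, dialogue_repr)  # hidden state initialized with dialogue repr.
        return self.out(out)                       # per-step vocabulary logits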
train_98702
Besides, Table 1 also shows that appropriately scaling up the model brings better performance but consumes more resources, which implies that the simplified HRED will perform better than the baseline HRED when time or memory is limited.
hierarchical Recurrent Encoder-Decoder (HRED) is a conversational model for building end-to-end dialogue systems.
neutral
train_98703
In this work, we focus on positive explanations.
to avoid having too many negative examples in our modified datasets, we only consider sentences that contain at least one candidate.
neutral
train_98704
The results are shown in Figure 3.
span-based training data is more powerful.
neutral
train_98705
We show that a multiple-choice version of HotpotQA is vulnerable to the same baseline that performs well on WikiHop, showing that this distinction may be important from an evaluation standpoint.
looking at the no-context baseline for comparison, we find that it is only around 2% lower than the two relatively more complex models.
neutral
train_98706
Our combination strategy yields larger gains over the SMT baselines than simpler rescoring or pipelining used in prior work on hybrid systems (Grundkiewicz and Junczys-Dowmunt, 2018).
both SMT and neural sequence-to-sequence models require large amounts of annotated data.
neutral
train_98707
Moreover, the calculation of the output o_h is restricted to a single individual subspace, overlooking the richness of contexts and the dependencies among groups of features, which have proven beneficial to feature learning (Ngiam et al., 2011; Wu and He, 2018).
cNNs, revealing that extracting local features with dynamic weights is superior to assigning fixed parameters.
neutral
train_98708
We fail to replicate the reported results of SGM on AAPD using the authors' codebase and data splits.
(2016) and we are able to achieve good classification results without attention mechanisms.
neutral
train_98709
We also provide the standard deviation of the scores across different seeds to demonstrate the stability of our results.
both of the models surpass the original result by nearly two points for the IMDB dataset.
neutral
train_98710
The highest coverage ratio for each attribute is usually obtained when masking that attribute in the distractor MR (entries on the main diagonal, underlined), in particular for FAMILYFRIENDLY (FF), FOOD, PRICERANGE (PR), and AREA.
the base speaker S 0 model is often underinformative, e.g., for the E2E task failing to mention certain attributes of an MR, even though almost all the training examples incorporate all of them.
neutral
train_98711
a person in a green jacket is surfing while holding on to a line.
our KL-penalty further improves the stochasticity of WAE, as we achieve the highest performance in all diversity measures.
neutral
train_98712
We investigate the utility of sequence-to-sequence models with attention (Bahdanau et al., 2015) to generate concrete realizations of abstract task descriptions.
add a cut amount of cucumber ... | Gold: put the wrap on a plate.
neutral
train_98713
In contrast, qualitative analysis of the unparameterized transition model shows that its alignments learn desirable correspondences (see Figure 2).
the forward variable $\alpha_i(j)$, representing $p(y_{1:j}, a_j = i \mid x_{1:i})$, is defined recursively as $\alpha_i(j) = p(y_j \mid i, x_{1:i}, y_{1:j-1}) \times \sum_{k=1}^{i} \alpha_k(j-1)\, p(a_j = i \mid k, x_{1:k}, y_{1:j-1})$.
neutral
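The recursion in train_98713 is the standard forward algorithm; a direct transcription in Python, with emission and transition as placeholder callables standing in for the neural components that the excerpt does not specify:

import numpy as np

def forward(emission, transition, I, J):
    # emission(i, j)   ~ p(y_j | a_j = i, x_{1:i}, y_{1:j-1})
    # transition(i, k) ~ p(a_j = i | a_{j-1} = k, x_{1:k}, y_{1:j-1})
    alpha = np.zeros((I + 1, J + 1))
    for i in range(1, I + 1):
        alpha[i, 1] = emission(i, 1)           # base case; initial alignment prior assumed uniform
    for j in range(2, J + 1):
        for i in range(1, I + 1):
            # alpha_i(j) = p(y_j | ...) * sum_{k<=i} alpha_k(j-1) * p(a_j = i | k, ...)
            alpha[i, j] = emission(i, j) * sum(
                alpha[k, j - 1] * transition(i, k) for k in range(1, i + 1))
    return alpha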
train_98714
We expect all the models to implicitly capture aspects of world knowledge.
most relevant to this paper is the pioneering work of Regneri et al.
neutral
train_98715
One minor exception is that neither predictability nor frequency improves significantly over the other in Dundee.
one major such confound is temporal diffusion (i.e.
neutral
train_98716
fixation duration) (Demberg and Keller, 2008;Frank and Bod, 2011;van Schijndel and Schuler, 2015;Shain et al., 2016).
one major such confound is temporal diffusion (i.e.
neutral
train_98717
Because only a few classifiers are suitable to modify a given noun (again, see Table 2) and the entropy of classifiers for a given noun is predicted to be close to zero, MI between classifiers and nouns is expected to be high.
we also assume that every classifier-noun or classifier-adjective pairing we extract is equally acceptable to native speakers.
neutral
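The expectation in train_98717 (near-zero classifier entropy for a given noun implies high mutual information) can be checked directly on co-occurrence counts; a minimal sketch, with the pair-count dictionary format assumed for illustration:

from collections import Counter
from math import log2

def mutual_information(pair_counts):
    # pair_counts: {(classifier, noun): count}, e.g. {("zhi", "donkey"): 40}
    total = sum(pair_counts.values())
    cls_c, noun_c = Counter(), Counter()
    for (c, n), k in pair_counts.items():
        cls_c[c] += k
        noun_c[n] += k
    mi = 0.0
    for (c, n), k in pair_counts.items():
        p_cn = k / total
        mi += p_cn * log2(p_cn / ((cls_c[c] / total) * (noun_c[n] / total)))
    return mi  # high when each noun strongly prefers one classifier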
train_98718
For example, most nouns referring to animals, such as 驴 (lǘ u, donkey) or 羊 (yáng, goat), use the classifier 只 (zhī).
(2010, 216) that show that children as young as three know classifiers often delineate categories of objects with similar shapes.
neutral
train_98719
Although it does not make it into the top three, MI for the SPATIAL sense is still significant.
it's possible that native speakers differ in either their knowledge of classifier-noun distributions or confidence in particular combinations.
neutral
train_98720
In fact, in standard English, inside sentences only proper nouns start with an upper-cased letter; thus, fine-tuning the pre-trained model fails to slough off this pattern, which is not always respected in tweets.
to do so, we follow (Tamaazousti et al., 2017) and analyse the units of Φ (biLSTM layer) before and after fine-tuning.
neutral
train_98721
Furthermore, we have observed that despite the normalisation, the performances of the pre-trained classifiers were still much better than the randomly initialised ones.
indeed, as illustrated in the left plot of Fig.2, at the end of training, the distribution of the random units' weights is still absorbed (closer to zero) by that of the pre-trained ones.
neutral
train_98722
via maximum likelihood estimation), is the weight used for redistributing the preserved mass (e.g.
in neural LMs this remains an open question (Kawakami et al., 2017; Kim et al., 2016; Cotterell et al., 2018), while a common practice is pruning the training corpus and imposing a closed-vocabulary assumption (Mikolov et al., 2010), where rare words at training time and unseen words at test time are treated as an UNK token.
neutral
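The common practice described in train_98722 (prune the training corpus, map rare and unseen words to UNK) is easy to make concrete; a sketch with an assumed frequency threshold:

from collections import Counter

def build_closed_vocab(corpus, min_count=2, unk="<UNK>"):
    # Rare training words and unseen test words both collapse to the UNK token.
    counts = Counter(w for sent in corpus for w in sent)
    vocab = {w for w, k in counts.items() if k >= min_count} | {unk}
    def encode(sentence):
        return [w if w in vocab else unk for w in sentence]
    return vocab, encode

vocab, encode = build_closed_vocab([["a", "rare", "word"], ["a", "b", "a"]])
print(encode(["a", "unseen"]))  # ['a', '<UNK>']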
train_98723
Our data augmentation method uses UM inflection tables and creates additional training examples by finding Wikipedia sentences that use the inflected wordforms in context, pairing them with their lemma as shown in the inflection table.
finally, we shuffle the extracted sentences to encourage homogeneous type distribution across the entire text.
neutral
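The augmentation recipe in train_98723 — find Wikipedia sentences containing an inflected form, pair them with the lemma from the inflection table, then shuffle — fits in a few lines; the table format ({inflected form: lemma}) and the function name are assumptions for illustration:

import random

def augment(inflection_table, sentences):
    examples = []
    for sent in sentences:                 # sent: list of tokens
        for tok in sent:
            lemma = inflection_table.get(tok)
            if lemma is not None:          # sentence uses an inflected wordform
                examples.append((sent, tok, lemma))
    random.shuffle(examples)               # encourage homogeneous type distribution
    return examples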
train_98724
The representation of a local l-gram window is obtained by an improved approach over Eq.
this illustrates the effectiveness of our proposed model from a general perspective.
neutral
train_98725
Nielsen and Chuang (2010) introduced three measures, namely trace distance, fidelity, and VN-divergence.
the corresponding trainable components in state-of-the-art neural network architectures, namely kernels in CNNs and cells in RNNs, are represented as arbitrary real-valued parameters without any constraints, which makes them difficult to understand.
neutral
train_98726
Experiments on benchmarking QA datasets show that CNM has comparable performance to strong CNN and RNN baselines, whilst admitting post-hoc interpretations in human-understandable language.
it is computationally costly to compute these metrics and propagate the loss in an end-to-end training framework.
neutral
train_98727
In total, each formulated question is accompanied by five candidate answers, including one correct answer. Dataset statistics:
  # CONCEPTNET distinct question nodes: 2,254
  # CONCEPTNET distinct answer nodes: 12,094
  # CONCEPTNET distinct nodes: 12,107
  # CONCEPTNET distinct relation labels: 22
  average question length (tokens): 13.41
  long questions (more than 20 tokens): 10.3%
  average answer length (tokens): 1.5
  # answers with more than 1 token: 44%
  # distinct words in questions: 14,754
  # distinct words in answers: 4,911
Verifying question quality: We train a disjoint group of workers to verify the generated questions.
in total, five candidate answers accompany each question.
neutral
train_98728
Our results show that under limited textual context, models are capable of leveraging the visual input to generate better translations.
we analysed the behavior of state-of-the-art MMT models under several degradation schemes in the Multi30K dataset, in order to reveal and understand the impact of textual predominance.
neutral
train_98729
2017, which promotes fairness by requiring that the covariance between a protected attribute and a data point's distance from a classifier's decision boundary is smaller than some constant.
similarly, the TPR for gender g and occupation c is $\mathrm{TPR}_{g,c} = P(\hat{Y} = c \mid G = g, Y = c)$ (10), where g and ∼g are binary genders and G is a random variable representing an individual's gender; the TPR gender gap for occupation c is the difference $\mathrm{TPR}_{g,c} - \mathrm{TPR}_{\sim g,c}$.
neutral
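The quantity in train_98729 reduces to a few lines of array arithmetic; a numpy sketch assuming binary gender codes 0/1 and integer occupation labels:

import numpy as np

def tpr_gap(y_true, y_pred, gender, c):
    # TPR_{g,c} = P(Y_hat = c | G = g, Y = c); the gap is TPR_{g,c} - TPR_{~g,c}.
    def tpr(g):
        mask = (gender == g) & (y_true == c)
        return (y_pred[mask] == c).mean()
    return tpr(0) - tpr(1)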
train_98730
Using λ = 0 leads to significant gender bias: the maximum TPR gender gap is 0.303.
given a sufficiently large number of clusters, CluCL is able to simultaneously mitigate multiple biases, including those that relate to group intersections.
neutral
train_98731
We begin to see some interesting trends in [16], who found that the use of the stereotypical sentence-final particle wa was positively valued when used by young women when attempting to convey a specific impression of femininity -'coquettishness'.
instead they were inclined to use sentence-final particles such as ne and sa, either as a method of seeking support and/or as a speech tactic to hold the floor in the conversation.
neutral
train_98732
[9] gives an in-depth look at the Scottish English of Morningside, and looks into age variation according to sex.
for the same reasons financial demands rise.
neutral
train_98733
An overall summary grade, representing facility in spoken English, is calculated as a weighted combination of the continuous measures and the categorical measures.
both the human scores and the machine scores for the Overall "facility" construct exhibit a reliability of 0.94, but we need to know if the machine scores actually match the human listener scores.
neutral
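The two steps raised in train_98733 (a weighted overall grade, and whether machine scores track human scores beyond mere reliability) can be expressed directly; the weights here are placeholders, not the assessment's actual ones:

import numpy as np

def overall_grade(subscores, weights):
    # Weighted combination of continuous and categorical measures.
    return np.dot(subscores, weights) / np.sum(weights)

def human_machine_agreement(human, machine):
    # Pearson correlation between human and machine overall scores.
    return np.corrcoef(human, machine)[0, 1]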
train_98734
• T is a finite set of lexical items (hereafter called tokens) composing the discourse segments.
in our approach, the frequency of co-occurrences for each pair of lexical items is collected and the collocation measure between them is calculated.
neutral
train_98735
The SVD of the matrix W is then defined as the product of three matrices, W = BΛB^T, where the columns of B contain the eigenvectors of W and Λ is a diagonal matrix containing the eigenvalues in descending order. The eigenvectors are normalized to have length 1 and to be orthogonal, which means that they satisfy the condition B^T B = I. Decomposing a regular matrix into a product of three other matrices is not too interesting.
the truncated SVD matrix is used to show the high coherence relationship of the segments in the document, and to estimate the structure in segments across the document.
neutral
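The truncated decomposition in train_98735 amounts to keeping the leading eigenpairs; a numpy sketch for a symmetric matrix W (the toy data is invented):

import numpy as np

def truncated_eig(W, k):
    vals, B = np.linalg.eigh(W)            # symmetric eigendecomposition; B.T @ B = I
    order = np.argsort(vals)[::-1][:k]     # eigenvalues in descending order
    return B[:, order], vals[order]

W = np.array([[2.0, 1.0], [1.0, 2.0]])
B_k, vals_k = truncated_eig(W, 1)
print(B_k @ np.diag(vals_k) @ B_k.T)       # rank-1 reconstruction of W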
train_98736
Since each lexical item must be declared in the Cooking Lexicon, it offers an overview of lexemes used in the experiment.
this resultative meaning vitally affects acceptability of the sentence.
neutral
train_98737
The abstract case and tense can be spared, and the notion of trace is handled in terms of structure sharing between NPs.
chafe (1970: 132) also introduces a variety of derivational rules such as causative, inchoative, and resultative, etc.
neutral
train_98738
Although ba is a resultative verb, the resultative meaning does not reside in the lexical content.
the cooking domain involved here is very small, but functionable.
neutral
train_98739
With all this information properly encoded in the Cooking Knowledge, the parser is capable of distinguishing abnormal utterances like 'cut the oil' from logical sentences in the cooking domain.
in the matrix diagrams of ba, the resultative verb is lexically specified as `non-resultative' in SEMTYPE.
neutral
train_98740
A sentence like Sono Eikokujin wa wakaranakatta (That Englishman wa didn't understand) is better than Sono Eikokujin ga wakaranakatta.
compare Sachi wa nihon ye kaerimashita (Sachi wa to Japan returned) with Sachi ga nihon ye kaerimashita which in some cases (more cases than with `wa') suggests that Sachi is unique in her returning, or at least that Sachi makes up a complete list of (relevant) people who have returned.
neutral
train_98741
• It follows that there is an object u in w which satisfies `piza(t)' in every world where `watashi' is relevant.
this means that `watashi wa piza' is a better answer, for `piza' may be singular thus making `watashi ga piza' peculiar.
neutral
train_98742
win-KA lose-KA-TOP time-ANP chance-COP-PRES "Victory or defeat depends on chance."
in Japanese, conjunctive coordination is expressed with the conjunctive particle "mo" while disjunctive coordination is expressed with the disjunctive particle "ka", as shown in (1):3 (1) a. Ken-mo Naomi-mo ki-ta.
neutral
train_98743
In cases where the whole set of passengers passed through either the west gate or the east gate, this sentence would not be felicitous.'
ken-MO Naomi-MO come-PAST "Both Ken and Naomi came."
neutral
train_98744
Another partitioning use of "ka" is found in embedded questions. Here are three variations of embedded questions in which we can see partitioning into two propositions.
in a non-indicative statement such as a generic statement, we find determinacy of the truths of related propositions and disjunction gets "and"-reading.
neutral
train_98745
In this case, the lack of clear-cut contrasts does not affect the legitimacy of a defining relation.
we appreciate very much colleagues at our corpus linguistics and lexical semantics working group for their input and discussion.
neutral
train_98746
chat can be measured in terms of its temporal duration.
the first is that diathesis alternations have not been extensively studied in Mandarin, unlike English, where as Levin notes, there were several important studies done on the verbs cut, hit, break and touch prior to her own work.
neutral
train_98747
In example (14a), the focus is on incremental theme and therefore the measure phrase describes the resulting number of wounds.
since they do have totally different (logical) event structures, previous theories may have to treat them as homophones.
neutral
train_98748
Our goal is to locate the linguistic relation that defines the contrast.
the first assumption is that lexical semantic contents are mapped to the morphosyntactic level and can be used to predict grammatical behaviors [e.g.
neutral
train_98749
Ker's work shows us the feasibility of a purely linguistic approach to the resolution of the alignment problem.
one of the best examples of semantic similarity between two languages is a bilingual dictionary: almost all of the source words have their translations in the target language.
neutral
train_98750
(18) shows a possible derivation of the CP ai-te ar hi (17).
it is not argument-sharing but functional control that brings about this effect.
neutral
train_98751
In Nikkei's case, the error rate for adapted lexicons is generally higher and its improvement by adaptation is not as obvious as Kyodo.
in fact, it makes the best total performance.
neutral
train_98752
In our current experiments, we only sum up the word counts of correspondent vocabulary to have a new ranking, which represents the exploitation in both resources.
they are therefore chosen as our main targets and the following OOV statistics are basically referred to them.
neutral
train_98753
The segmentation error rate (= (substitution + insertion + deletion) / total words) is evaluated by comparing the reference (manually segmented) data with the automatic segmentation result, which is done by a statistical approach with a lexicon involved.
we don't need them in the target lexicon because they can be treated as a sequence of Arabic digits or Kanji number segments, depending on how they are pronounced in speech recognition.
neutral
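The error rate defined in train_98753 is computed from a Levenshtein alignment between the reference and the automatic segmentation; a self-contained sketch:

def segmentation_error_rate(ref, hyp):
    # (substitutions + insertions + deletions) / total reference words
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                        # deletions
    for j in range(n + 1):
        d[0][j] = j                        # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n] / m

print(segmentation_error_rate("a b c d".split(), "a x c".split()))  # 0.5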
train_98754
Section 6 concludes the discussion.
for (2a), for instance, the predicate ahure 'to overflow' and the predicate more 'to leak' specify that the subject argument is some form of liquid, and the argument yokusou 'bathtub' specifies further that it is water.
neutral
train_98755
Thus, this kind of data lends very strong support to the empty-category analysis or the analyses which turn to pragmatics exclusively to decide the semantics.
one of such behaviors concerns the identification of the antecedent (henceforth, the (semantic) target) of the IHRC.
neutral
train_98756
The targets in (2b-c) are not allowed to occur on the surface. If the target does not surface, there seems to be no straightforward way to syntactically accommodate the structure.
in Japanese, the expected reading is available only in the form of the IHRC.
neutral
train_98757
Meanwhile the two nominative NPs in the psych pattern are in a regular stative transitive construction.
in addition to providing an accurate reflection of the sentence meaning, it predicts the behavior of the first NP.
neutral
train_98758
b. FP: Sensayngnim_i-i sonca_i-ka caki_{i/*j} pan-eyse ceyil yeypputa.
the sentence does not say of John that he is good in general (the interpretation that would be required if coh-ta combined directly with John).
neutral
train_98759
For example, the tail, chaek-ul 'book-acc' in (3c) is moved to the left of the contrastive focus, Nuku-ka 'who', and constructs the presupposition, 'x threw the book' along with the verb tenchi-ess-ta 'threw'.
in (8B2), the old argument, Suni, is deleted without any informational function.
neutral
train_98760
The most transitive sentence in this respect is one in past tense, perfective aspect and indicative mood (for example 'you have built the house').
the definition I have proposed in this paper does not cover all the possible phenomena and does not always enable us to choose only one sentence (type) to present the prototypical transitive sentence of the language in question.
neutral
train_98761
The antecedents of the null anaphors in the above dialogue are related to the hotel reservation.
(28) Ul: kulayse incey ey cohci nayka kulay hanta hay kaciko hay therefore well oh good me so try determined Pollanikka talun kenun casinissnuntey swuhak-i mwunceyeyyo.
neutral
train_98762
Sets are incorporated into the rules and categories of CCG in order to allow flexibility for handling scrambled word orders directly with single lexical categories.
set-CCG also accepts scrambled orders with single lexical categories by incorporating sets into the categories and rules of CCG.
neutral
train_98763
As a result, representation comes to play an important role in bridging the gap between human and computational processing.
although they are pure conceptual constructs, tables are meta-theoretic objects to which mathematically explicit operations can apply.
neutral
train_98764
Suppose we have a simple sentence: (15) Mia loves Kim.
representation comes to play an important role of bridging the gap between human and computer interactions.
neutral
train_98765
In the same time they exhibit several important differences with respect to the syntax and semantics of the quantification in natural languages.
de re interpretation in which there is a specific movie and all students in the described situation are watching it.
neutral
train_98766
If we assume that the constraint hierarchy THC > LPC > CCC (brief form for our convenience) is correct in Korean, one can obtain the right anaphoric interpretation in the psych construction, as illustrated in tableau (31). In (32), John satisfies THC but violates CCC because it stands structurally lower than 'himself.'
even though LOC seems to be a weakly motivated constraint, I assume that it exists universally in both languages.
neutral
train_98767
It has been argued in the literature, however, that the dative argument occupies a higher position than other arguments.
(29a) with the nominative possessor is derived as in (30). In kono inu-no atama, the category TP/LTPpos is assigned to atama 'head' because it is not an independent entity but a part of it (a function taking possessors to return their heads).
neutral
train_98768
While there are expressions which were strongly associated with either female (e.g.
for example, having four legs, tails, beaks, wings, and so forth are some salient properties of animals, but they are not mapped onto the concept of woman.
neutral
train_98769
Putting this process into a general model, we may get the following (see 4.4 for further discussion): The property of the target concept is defined socially.
according to their view, "human thought processes are largely metaphorical" and the "human conceptual system is metaphorically structured and defined" (6).
neutral
train_98770
According to the conceptualization in Japanese, human male is considered a work force, whereas human female is considered untrained.
does not mean that human beings require the same kind of energy as machines.
neutral
train_98771
Compare (3a) and (3b).
while object topicalization is quite marked and appears primarily in spoken register in English (Lambrecht [8]), it is not so marked in Japanese and is allowed more freely in written register.
neutral
train_98772
(lb) requires no special intonation.
(6) and (lb) differ with regard to the information status of the logical object.
neutral
train_98773
How, then, can cases of symmetric projection be distinguished from those of asymmetric projection?
then, the lexical category go and the determiner the, which is a functional category, are characterized as follows: (21) a. go: =0 (goal), =0′ (agent); b. the: =n. One of the distinctions that are standardly assumed between lexical and functional categories can be captured in terms of these two types of selectional features.
neutral
train_98774
TOP until waited but after all call TOP not existed 'John waited until (the train got to) Osaka, but after all he didn't have a call.'
moreover, the semantic feature [unspecific], which originates in ka and is crucial for the interpretation of the second phrase, projects up to both phrases.
neutral
train_98775
Licensees and licensors are features involved in movement.
as for the first question, Nishigauchi in [7] makes an important observation on pairs of examples such as (14a,b): (14) a. Dare ka kara henna tegami-ga todoita.
neutral
train_98776
Relations between preposed (initial) and postposed positions and the discourse functions of temporal, conditional, and causal adverbials have been studied quite extensively.
(12) in English is a counter-factual sentence (though there's no syntactic representation of counter-factual in Chinese, it is obtained through discourse).
neutral
train_98777
In section 2, we see how the connectors under temporal and conditional clauses are distributed in Chinese discourse.
consider the notion of time flow: it is more natural if we mention A first then B by sequence; the inversion of time sequence is not so natural but serves the function of topic continuity.
neutral
train_98778
Temporal clauses deal with time, and conditional clauses deal with hypotheticality.
the distribution is that temporal and conditional clauses tend to occur before their modified material (preposed/ initial), while causal clauses are more likely to occur after the modifiee (postposed/final).
neutral
train_98779
An alternative question presents two or more options for the reply.
in this paper, we deal with English alternative questions such as (1) within the framework of Head-Driven Phrase Structure Grammar.
neutral
train_98780
Conversely, if one of the clauses is non-ta-marked and the other is a to-form, the interpretation from the utterance time perspective entails that from the perspective of the matrix clause time.
in these foregoing studies, a feature MEDIUM-TIME was introduced as a mediator of temporal information between the subordinate and matrix clauses, which is only motivated by the difference in the tense interpretation of non-ta forms depending on their Aktionsarten.
neutral
train_98781
Traditional studies on tenses in complex sentences have referred to the tense marking in the subordinate clause as 'relative tense,' distinguishing it from the 'absolute tense' indicated by the matrix predicate (see for example Teramura 1984).
although in (20) the embedded clause nai-te iru (is crying) is by default interpreted in relation to the tense of the matrix predicate, it can be extended with adverbials such as ima (now) and asoko de (over there) that make a reading more reasonable in which the embedded tense is interpreted in relation to the utterance time.
neutral
train_98782
This in fact keeps in line with the observations made so far, as long as the relative clause is involved.
they are not an exhaustive study of tenses in Japanese subordinate clauses: the subordinate clauses taken up there are limited to relative clauses, the quotative clause marked by to and to yu, the temporal clause introduced by toki (when), mae (before), and ato (after), and nominalization by no and koto.
neutral
train_98783
Keenan Stavi 1986, Keenan 1996) considers that they result from the application of a discontinous determiner to a common noun.
examples are given in (29) and (30): (29a) every student will swim unless it is raining; (29b) every student will swim except if it is raining; (30a) Leo will go to the party unless he is tired; (30b) Leo will go to the party except if he is tired. Now it is clear that, given the above, equivalent sentences of the first type can be directly analysed in the same way as the sentences in which EXCL phrases occur, which are analysed in the preceding section.
neutral
train_98784
Now we can also build up a rough semantic generalization of Table-1 of Korean PSIs.
b. Affective polarity items are licensed by nonveridical operators/connectives.
neutral
train_98785
In contrast, amwu-na reads as 'anybody, irrespective of the quality he has'.
(20) a. Jini-malko tto nwuka hapkyekha-ess-ni?
neutral
train_98786
This allows us to link the names of frames, frame elements, and other items in the database directly to the annotation tags.
in a sentence like The bartender asked for my ID, it is the individual who occupies that role that we understand as making the request, and the request for identification is to be understood against the background of that frame.
neutral
train_98787
But here the task of lexicography is more restricted: to characterize the combinatorial requirements and possibilities that are specifically associated with individual words rather than with the possibilities provided by the whole grammar of the language.
the word widow evokes a complex historical frame in which a woman is seen as having a particular social status because the man to whom she was (most recently) married has died.
neutral
train_98788
Moreover, the distributional differences that exist for the Mandarin verbs of putting bai3 and fang4 (such as co-occurrence with a progressive to describe a process (2a), taking a resultant object (3a), and being modified with an orientational adjunct (4a)) do not exist in English (see examples 2b-4b respectively).
in all the cases in (11) below, the locative role is a metaphorical extension to an abstract concept such as 'quandary' or 'trial.'
neutral
train_98789
He set/put the pin on the cushion.
conceptualizations of 'set' and 'put' in English and Mandarin have different semantic and syntactic entailments.
neutral
train_98790
The results of the last experiment provided results for "conceptual clusters".
when a cluster surfaces based on conceptual categories but not collocation statistics, inaccurate translations result.
neutral
train_98791
The specific CA technique used in this study is an agglomerative hierarchical method.
for languages with little morphology, a different approach is necessary.
neutral
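The agglomerative hierarchical clustering mentioned in train_98791 is available off the shelf in scipy; a sketch over toy context vectors (the data and cluster count are invented):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

vectors = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1], [0.5, 0.5]])
Z = linkage(vectors, method="average")     # agglomerative merge tree
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                              # cluster assignment per vector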
train_98792
In this case, the implicit argument of else acts like a bound variable and what is obviated varies according to the quantifier to which it is bound: (10) a.
we should seek some other view on the exceptions.
neutral
train_98793
(26a) vs. (26b) below [footnote 7: two other specifier-head-complement configurations can be obtained]: (a) specifier on the left; (b) specifier on the right. It turns out there is a considerable difference both in terms of the number of LR actions performed and the stack depth required to process an example like formalize, analyzed as formal-i(z)-e in (27) below.
we define the local domain of an asymmetrical relation r as the smallest domain where r applies.
neutral
train_98794
The contribution of particular elements of a theory to computational complexity can be determined through experimentation.
to limit this for inc and caus, we appeal to semantic considerations: that is, inc and caus occur at most once per singular event.
neutral
train_98795
We also try to place emphatic particles in their proper place within overall grammar of the Japanese language.
(2) commits the speaker only to the possibility of somebody other than Taro also having come, whereas (3) to its truth.
neutral
train_98796
We specifically looked at sika and claimed that it is different from other Japanese NPIs.
(14) Taro-sika ookina-kemusi-no iru ringo-o tabe-nai
Taro-SIKA big-worm-be apple-OBJ eat-NEG
'Only Taro eats an apple which a big worm is in.'
(15) Taro-sika kasikoi-tori-kara kakureta ookina-kemusi-no iru ringo-o tabe-nai
Taro-SIKA wise-bird-from hiding big-worm-be apple-OBJ eat-NEG
'Only Taro eats an apple which a big worm that is hiding from a wise bird is in.'
A sika construction can embed another sika construction, as can be seen in the following:
(16) Taro-sika_1 Ken-ga eigo-sika_2 ie-de hanas-anai_2 koto-o sir-anai_1
Taro-SIKA Ken-SUBJ English-SIKA house-at speak-NEG fact-OBJ know-NEG
'Only Taro knows that Ken speaks only English at home.'
Taro-sika is interpreted as associated with sir-anai, hence the same index, while the internally embedded eigo-sika is interpreted as associated with hanas-anai.
neutral
train_98797
And we derive the topics ranked by weights as follows: Egress, Focus, Posterity, Disease, Completion.
many references have shown that information in a machine-readable dictionary (MRD) is an unbiased knowledge source for WSD (Guthrie et al.
neutral
train_98798
They reported conceptual representation acquired from MRD for a word sense did improve the precision of sense tagging on ambiguous words when compared with the corpus-based approach.
although it has good coverage (Farreres et al. 1998, Kwong 1998) and its synset is much like a thesaurus, the synset fails to provide an explicit classification.
neutral
train_98799
We sum up the above descriptions and outline the algorithm for identifying the relevant conceptual list as follows.
it would also be difficult to generate appropriate representation of conceptual categories for the WordNet senses.
neutral