id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_100000
Few studies have explored the oral production, which is a kind of instantaneous task where more attention is paid to semantics and pragmatics.
this section investigates the influence of English proficiency and production form on the frequency of use of the six types of tCC.
neutral
train_100001
We found that L1-Chinese learners at intermediate level tend to use more MDCs, while no difference in the use of BGCs is found between the two level groups, and that the incidences of MDCs and BGCs are substantially higher in oral production than in written production.
a certain relation holds between these two parts.
neutral
train_100002
The OC texts contain QA texts like dialogue.
table 6 shows examples analyzed correctly by the Vote model, but not by the All model.
neutral
train_100003
When we decrease during training, will stay away from core words.
the most important message in the experiments is a suitable manual constraint will certainly help the network extract more effective, robust features and achieves a better score than traditional attention mechanism does.
neutral
train_100004
Actually, features extracted by both CNN and RNN are, in some way, compatible and unique.
we observe that our model achieves new state-of-the-art result on this dataset.
neutral
train_100005
Table 1 shows that the corpus in the domain "books" is significantly larger than the others.
in real applications, the source domain S in the learning stage may be different from the target domain T in the testing stage, In this case, the classifier learned in the source domain S is ineffective in the target domain T .
neutral
train_100006
To illustrate, for the perceived-to-be-serious type, medical advice was always present together with medical diagnoses.
(in press), by coding levels of a variable as either 1 or -1, zero is between the two levels and represents the lack of difference.
neutral
train_100007
We observed two notable trends in the results.
frequency Polarity anxiety 2233 neg meds 1229 neu medication 934 pos disorder 698 neg psychiatrist 382 neu adderall 320 pos suicidal 316 neg disability 145 neg abusive 136 neg insomnia 131 neg Table 3: A set of words that appear only in the high f2 weighted group compared to the low f2 weighted group and their polarities derived by SentiWordNet.
neutral
train_100008
The overall architecture of Feature Attention Network is shown in Figure 1.
p) denotes a sequence of vectors, namely a matrix.
neutral
train_100009
We focus on the order of words among other styles and use the distribution of part-of-speech tags.
to address this issue, we propose Feature Attention Network (FAN), inspired by the process of diagnosing depression by an expert who has background knowledge about depression.
neutral
train_100010
In order to improve generalization, we used dropout and an L2 regularization.
our model uses smaller training data as input due to limited computing power, showing lower performance than the state-of-the-art model that uses three times more data than ours and is trained in a less interpretable way.
neutral
train_100011
'Hana poured water over Taroo, but he didn't get splashes of water.'
[-ar-ta > -at-ta] Note that, when the verb in (8) is causativized and turned into a ditransitive, the ambiguity disappears and the newly added nominative argument and the "demoted" dative argument are interpreted as an agent and an (intended) recipient, respectively.
neutral
train_100012
At the core of their analysis is the assumption that have introduces an argument but does not assign its own theta-role to that argument, whose interpretation is determined at LF (or at the C-I interface under current conceptions) based on have and its complement.
i assume that this conventional association is stored as part of our encyclopedic knowledge, which is invoked when a particular set of lexical items are combined to form a verbal predicate.
neutral
train_100013
Hitherto, we have distinguished the sentence final particle k'ɯ from the verb k'ɯ.
1 It is worth mentioning that k'ɯ is free to sentences indicating different time of events, while qu is confined to sentences indicating the future events.
neutral
train_100014
Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+"ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+"ve/have+been] is related to the speaker"s evaluation of the condition of an individual identified as she.
it is also found that when the construction appears at the beginning of an utterance, it is often preceded by markers like but, mm, well, er, yes, or yeah, which indicates the speaker"s readiness to launch a negative comment or counter argument.
neutral
train_100015
By treating modal semantics as ambiguous, the monosemous notion provides a common ground for the interpretation of different senses expressed by the modal verbs.
section four presents the result, section five discusses, and section six concludes the study.
neutral
train_100016
The rate of articulation does not reflect how much linguistic information, especially meaning as expressed by propositions, is conveyed or represented in a linguistic unit like a syllable.
more importantly, it is about "what" (meaning) is being communicated.
neutral
train_100017
Nevertheless, the children showed significantly greater variance (0.0004) than the adults (0.0001) (one-tailed variance test: F(34,28) = 2.59, p < .01).
how communicative efficiency of a human language can be quantified is not a simple question.
neutral
train_100018
If no such words were found in the sentence, a 0 score was applied.
the performance of VietSentiLex was evaluated by analyzing actual reviews from other datasets and domains.
neutral
train_100019
The Harvard General Inquirer (2002) has 4208 words that are classified as positive, weak positive, neutral, weak negative, and negative.
the number of lexicons decreased.
neutral
train_100020
Similarly and most famously, Halliday (1979;1985) indicated that speech and writing are both complex systems but in different ways: speech is more complex in terms of sentence structures while writing in terms of high lexical density.
it is an abstract machine that can be in exactly one of a finite number of states at any given time.
neutral
train_100021
Only with the measurement of both lexical and structural variations can we better index language complexity.
the ellipses in Figure 3 means the state jumps out from the current status into a new status (i+1 or 0 to i) and a same process is iteratively conducted.
neutral
train_100022
Ha Jin uses onceoccurring word types (60.2%) at a similar rate to that of the native speakers (61.0%), which is conspicuously higher than that of the advanced learners (51.6%).
as a point of reference, we obtained the chi-square value of a pair of novels by the native speakers -The Human Stain and Tinkers.
neutral
train_100023
Wu ( , 2018a, Wu & Kuo (2012), and so on, examine different temporal modifiers and their syntactic/semantic behavior.
under the scenario given above, yuánlái2 involves reported evidence, e.g.
neutral
train_100024
are considered involving epistemic modality.
in (4b), cāi 'guess' takes a stipulation or conjecture as its complement.
neutral
train_100025
While Wu (2012a) and Wu & Kuo (2012a) are on the right track in terms of the temporal semantics for běnlái and yuánlái, there are two semantic properties of yuánlái which have yet to be explained.
a re-examination of yuánlái2 is called for.
neutral
train_100026
In this scenario, the overall F1 score increased from 84.67 to 85.06 (see Table 3), which indicated that the external information from LM was helpful for the NER task.
while with the use of pre-trained language models, model parameters were not be updated during the iterative process of named entity recognition model.
neutral
train_100027
In this scenario, the overall F1 score increased from 84.67 to 85.06 (see Table 3), which indicated that the external information from LM was helpful for the NER task.
it is a low-resource language, Mongolian NLP resources are very rare.
neutral
train_100028
We evaluated the proposed models with a Japanese balanced corpus and confirmed the effectiveness of the candidate pruning by showing 0.056 point increase of accuracy.
(I NOM ) see v1 fashionable buildings ACC in the nearby shopping district DAT recently.
neutral
train_100029
Sharing the signifiers clearly does not entail affinity for the linguistic status, but sharing the signified could mean sharing the (lexical) semantic system representation of conceptual systems.
the greater the extent to which the meaning conveyed by the character and its radical is shared, the more transparent is the character.
neutral
train_100030
This is puzzling on one hand, but it is also widely accepted by linguists when terms such as Old English and Middle English are given.
with regard to combinability, 20 characters were selected for each of the categories.
neutral
train_100031
3) then there is an instruction ' Please select the word represented by the meaning of this picture 'presented on the screen for 1500ms.
they assumed that distractor stimuli could directly affect the stage of activation of phonologically related morpheme units in the formation level of the production network.
neutral
train_100032
Research showed that the communication of social support in online self-help groups was therapeutic to participants.
employing both qualitative and quantitative methods in future studies to scrutinize the communicative behaviors of users in OSGs and provide comprehensive results would prove beneficial.
neutral
train_100033
As shown in Figure 1, sentences where the subject has the controlling feature [+hon] got 4 Since there were no differences among different Non-subject GRs (i.e., AGREETYPE), we collapsed them into a single category "Non-subject" for the analysis of descriptive statistics.
in fact, the scrambled sentences were judged to be numerically worse than the canonical sentences.
neutral
train_100034
The current experimental results are consistent both with the analysis of A-scrambling as movement to SpTP and the hypothesis that HA and PC in Korean can only be controlled by the lower/thematic subject.
the relative impact of WORD ORDER in influencing the acceptability of HA was 0.10 compared to AGREEMENT.
neutral
train_100035
The example analogies given above are all irregular in form.
the L 2 norm delivered better results for their problem.
neutral
train_100036
To conceal means that the liar withholds some information that misleads the listener.
multidimensional analysis can classify linguistic features into dimensions systematically and scientifically.
neutral
train_100037
This may suggest that when people are telling truth, they tend to depict their feelings in more details, because they have really experienced that process.
(2016) recently studies deceptive utterances from the following five aspects: utterance length and specificity, complexity, hedging and uncertainty; comprehensibility; affect.
neutral
train_100038
Multidimensional analysis has the advantage of examining hundreds of linguistic features at a time and classifying them into several dimensions according to the frequency of co-occurrence among these features.
the results of some research are inconsistent with the findings mentioned above (Hauch, Blando ń-Gitlin, Masip & Sporer, 2012).
neutral
train_100039
We experimented with different thresholds and finally settled on minimum one content word and maximum three content words to be extracted between two NEs.
if there are too many content words intervening between two NEs, then it is likely they are not related at all.
neutral
train_100040
a tourist who is looking for some basic information about a place to visit.
we showed that query-based summarizers biased with a language model for a specific object type perform significantly better than standard querybased summarizers without such models.
neutral
train_100041
Other memory-based approaches to the problem of PP attachment can be found in [12] and [22].
table 4 shows the accuracies of systems trained and tested on the WSJ corpus.
neutral
train_100042
We compare the PP attachment module with a statistical full parsing approach [4] and analyze the results.
a shallow approach also has its shortcomings, an important one being that prepositional phrases, which contain important semantic information for interpreting events, are left unattached.
neutral
train_100043
Questions 18 and 20 were not correctly answered by any of the two systems.
lingPipe 5 and Freeling have been used for English and Spanish respectively.
neutral
train_100044
-the answer can be explicitly stated in one of the blog sentences, or a system might have to infer them; therefore, the answer is highly contextual and depends on the texts one is analyzing, the need for extra knowledge on the NEs (i.e.
firstly, we detect the overall sentiment of the blogs and subsequently a distinction between objective and subjective sentences is done.
neutral
train_100045
Furthermore, we created and annotated a set of questions and answers over a multilingual blog collection for English and Spanish.
the first one is the Question Analysis in which the language object of the study is determined using dictionaries with the criterion of selecting the language for which more words are found.
neutral
train_100046
The first one is the Question Analysis in which the language object of the study is determined using dictionaries with the criterion of selecting the language for which more words are found.
the authors would like to thank Paloma Moreda, Hector Llorens, Estela Saquete and Manuel Palomar for evaluating the questions on their QA system.
neutral
train_100047
All the abovementioned features make blogs a valuable source of information that can be exploited for different purposes.
however, we have seen cases in which each of three different consecutive sentences was a separate answer to a question.
neutral
train_100048
Feature Norm like Knowledge Acquisition For our experiments we choose the feature norm obtained by McRae and colleagues [6].
the pattern precision states how precise the selected pattern is in finding the properties in a certain semantic class and it is computed as shown at the end of the section 2.2.
neutral
train_100049
In order to harvest multi-word expressions and to achieve a better generalization across multiple similar sentences we use the following regular expression of a term definition: The abbreviation NPrep denotes a noun preposition and the straightforward abreviations Adv and Adj denote a adverb an an adjective respectively.
this paper introduces a novel method for acquisition of knowledge for taxonomies of concepts from the raw Wikipedia text.
neutral
train_100050
Identifying these three parts can be very useful, as they follow the classic rule of news writing: the first sentence is the most important one.
the way update is managed is very specific to our system, and taking into account its results should distort the pure summarizing results obtained by CB-SEAS.
neutral
train_100051
Our Spanish SO calculator (SO-CAL) is clearly inferior to our English SO-CAL, probably the result of a number of factors, including a small, preliminary dictionary, and a need for additional adaptation to a new language.
similar to our results, Chinese lexicons created by translating English lexicons did not help performance.
neutral
train_100052
The most interesting difference was the fact that verb forms in Spanish provide irrealis information.
original refers to all the 1,600 original versions and Translated to all 1,600 translated versions.
neutral
train_100053
It is very simple to use someone else's work just by replacing any numeric values, in combination with any necessary rewording.
the resulting word does not need to be a real English word.
neutral
train_100054
After a valid root is found, the denite article is determined and then the search hits for the generated plural form(s) are obtained.
after the disambiguation phase, only the verb and the adjective paradigms for gebruind survived' and they are correctly taken to be the nal outcome.
neutral
train_100055
Non-comparable adjectives like amorf (amorphous) have only the base and the base inected form.
a vowel is long if it is doubled (e.g., maan moon), if it is in a vowel combination (lief sweet, dear) or when it is at the end of a syllable (maken to make).
neutral
train_100056
Such adjectives have only a base form and thus, no valid paradigm is found for them.
we search for occurrences of de + root and het + root and return the article with the higher number of search hits.
neutral
train_100057
There are 188 cases of morphological ambiguity.
the corpora have already been parsed with the Alpino parser and many words have been added to the lexicon of the Alpino grammar.
neutral
train_100058
Restricted only by that limited knowledge the FSTs, in analysis mode, identify all possible forms and roots allowed by the word structure.
for almost each adjective three analyses will be delivered boost (angry): *boo, *boos and booz.
neutral
train_100059
Supervised machine learning techniques are still superior to unsupervised machine learning techniques for many NLP tasks.
inflection is a productive, but rather simple (in comparison to languages like Spanish or Finnish) morphological process in Afrikaans, with nine basic categories of inflection, viz.
neutral
train_100060
measured as entropybased on the annotated datathat is inversely related to the voting confidence of the classifier).
contrary to intuition and to results for AL in other areas than language processing, it is the selection of less prototypical instances first that provides the best improvement, both for word frequency and word length.
neutral
train_100061
First, we will explore ways of improving these results.
third, we asked four professional lexicographers to manually assign synonyms to definitions.
neutral
train_100062
Polysemy was defined by the number of definitions assigned to the verb by the TLFi.
the definitions of a synonym associated with a given verb usage (reflexive vs. non-reflexive) were compared only with the definitions of this particular usage.
neutral
train_100063
His experiments on PERSON type factoid questions have not shown a considerable improvement.
from TREC 2006 QA dataset are shown in Table 2.
neutral
train_100064
An empirical evaluation using TREC 2006 QA data set showed significant improvements using our query expansion method.
the same technique was found effective for the ad-hoc retrieval task.
neutral
train_100065
A systematic relationship between the outcomes of extrinsic evaluation and properties of a system's output can indicate directions for improvement in output, leading to improvements in the system's utility in its target setting.
in this work, we used two definitions of informativeness: (i) the number of (clinical) events NE that a text references; (ii) the length of the text in tokens (words) NW.
neutral
train_100066
In what follows, we summarise the main observations, discussing their implications in Section 7.
the significant effect of main target action does suggest that some of the burden is carried by the content selection strategies in the H and C texts, with the human-authored summaries incorporating more information that was relevant to the appropriate actions.
neutral
train_100067
This evaluation is similar to the one relying on manually assigned user-features.
illustration: The content presents an illustration of a concept or a process, either through the use of images, or through diagrams.
neutral
train_100068
In the fine grained evaluation, all four dimensions are considered, and thus we run a four-way classification.
instead of using the user annotations, we use the output automatically predicted by the classifiers.
neutral
train_100069
Moving to State S6 is triggered by the word the, which has neither backward nor forward dependencies; however, it is linked through a chain of dependencies with a future word board.
here, we examine the effect of lookahead features on the supertagger, operator tagger and dependency results.
neutral
train_100070
Instead the raw frequencies are compared directly.
for this method, a bilingual dictionary and a small amount of parallel data for the ME classifier is needed.
neutral
train_100071
Sentences are ranked by this score, and the highest-scoring sentences are selected for the summary.
furthermore, the topic space is much smaller than the original term vector space.
neutral
train_100072
Runs realized changing the specific parameters of Random Indexing as the number of dimensions and the number of training cycles show that the optimum partition realized with the Sub-target vector algorithm using 9 subtargets doest not change significantly (between 0.740 and 0.746 for the Precision and between 0.708 and 0.718 for Recall).
target vectors will be more sensible to typical documents and less sensible to non-typical documents.
neutral
train_100073
The system gives 92.1% recall and 89.7% precision over the test corpus (247 test items) in WD, M1 and M2.
our strategy to obtain the best disambiguation option was to chose first the morphosyntactic disambiguation level, and then we selected the best option for syntactic disambiguation.
neutral
train_100074
Then the gen is defined as follows.
we prepared the development set from section 21 of the treebank as in [5].
neutral
train_100075
Compared to [5,6], we also extend the definition of "centroid" from a word to an entity; and target at linking extracted facts instead of sentences.
a high-coherence text has fewer conceptual gaps and thus requires fewer inferences and less prior knowledge, rendering the text easier to understand [1].
neutral
train_100076
Our basic underlying hypothesis is that the salience of an entity e i should be calculated by taking into consideration both its confidence and the confidence of other entities {e j } connected to it, which is inspired by PageRank [13].
we also take into account other events that have already been generated to maximize diversity among the event nodes in a chain and completeness for each event node.
neutral
train_100077
2 The parsing model of MSTParser has the advantage that it can be trained globally and eventually be applied with an exact inference algorithm.
among the heuristics considered were majority votes for constituents and a similarity-based measure for complete trees.
neutral
train_100078
Under these conditions, an analysis having only a few grave conflicts may be preferred by the system against another one with a great number of smaller constraint violations.
in its interaction with external predictors WCDG should typically decide about the alternatives.
neutral
train_100079
In this respect, the increase in the structural precision of the PP attachment seems worth mentioning.
mSTParser as a statistical parser trained on a full corpus becomes a strong competitor for a PP attacher that has been trained on restricted four-tuples input.
neutral
train_100080
This comparison shows that the new examples have a radically different sense distribution than the SensEval data.
we have to keep in mind here that we only added a minimal number of examples (141).
neutral
train_100081
The quality of the lexicon has been evaluated with respect to adjectival tables listed in [13].
this will also allow us to validate the remaining treelex frames and verify performance for individual adjectives.
neutral
train_100082
We consider the propositional constituents in (5) the extraposed subject of the adjective, i.e., in impersonal constructions (the subject is il or ce), OBJ is the subject of ATS adjective.
in such cases, the subject of the adjective can be easily identified as it is indicated by the grammatical function of another argument of the verb: SUJ for ATS, and OBJ for ATO adjectives.
neutral
train_100083
Therefore, no other linguistic observations can help us specify the status of PPs in APs.
this list contains 177 adjectives found only with a basic frame in the corpus and there are 127 adjectives occurring with different frames in text.
neutral
train_100084
However, building large and rich enough predicate models for broad-coverage semantic processing takes a great deal of expensive manual effort involving large research groups during long periods of development.
the original SSI algorithm is very simple and consists of an initialization step and a set of iterative steps.
neutral
train_100085
Again, the the best results are achieved by both FSI and ASI variants on nouns and adjectives, and FSP on verbs.
remember that FSP always uses I and the first senses of the rest of words in P as context for the disambiguation.
neutral
train_100086
Doing so naturally increases all word frequencies, turning low frequency words into high frequency ones.
instead, removing input data would do the job, by turning high frequency words into low frequency ones.
neutral
train_100087
They both, together with a Baseline, have been evaluated in TIMEX3 identification for English and Spanish.
the presented approaches have been tested in tE identification within the previously described corpora and the results have been compared to the original tIMEX3 annotation.
neutral
train_100088
On the one hand, taking into account that same quality results have been obtained for English and Spanish using the same approach, this study will be extended to other languages to confirm if the analyzed hypothesis could be considered multilingual.
due to the fact the presented approach is based on semantic roles and multilingual semantic networks information, it could be valid also for other European languages that share several features at this level.
neutral
train_100089
Although originally intended to provide evidence of how much redundancy should ideally be included in generated anaphoric descriptions, preliminary findings reveal a number of little explored issues that are relevant to both referring expressions generation and interpretation.
this level of variation was deemed necessary to avoid domain and other linguistic effects 5 , and also to prevent the subjects from relying on memory.
neutral
train_100090
Constraintbased systems have the succinct advantage that in principle any modelled property of the target structure -be it linguistic or non-linguistic -can be constrained by the mere addition of appropriate further constraints.
the predictor can only provide dependency scores for word pairs -and not for syntactically more complex units such as phrases or clauses.
neutral
train_100091
The experiments described above lead to the conclusion that we need a hamza module that can take the raw text and restore the hamzas if necessary, before the text is passed to the diacritization system.
the first step towards this problem is a system that reaches a high accuracy in an in-vitro evaluation, i.e.
neutral
train_100092
They reach a WER (for diacritization) of 16.5% on conversational Arabic.
these studies leave the lexical context of words for the most part unexplored.
neutral
train_100093
Abrupt polarity changes between (sub)contexts are boosted by v : for example, a NEG (sub)context followed by a POS one may indicate a shift in perspective or negation.
we believe that access to these rich layers is required for deeper logical sentiment reasoning in the future.
neutral
train_100094
In the near future, we plan to further evaluate the Leffe by comparing the coverage and precision of different deep parsers that rely on the same grammar but using different morphological and syntactic lexicons such as the Leffe.
we would like also to thank the group Gramática del Español from USC, and especially to Guillermo Rojo, M. Paula Santalla and Susana Sotelo, for granting us access to their lexicon.
neutral
train_100095
In order to obtain such behavior, we simply bypass the internal lexicon.
the final result is a morphological and syntactic lexicon with an important coverage in terms of morphological information but a more restricted one in terms of syntactic information.
neutral
train_100096
For example, all verbal lemmas that were not covered by ADESSE or SRG received the following subcategorization frame: <Suj:sn|cln,Obj:(sn|cla)> (transitive verb with optional direct object).
we used a corpus built from a subset of the Spanish part of the Europarl 19 containing approx.
neutral
train_100097
Results of Classifiers 1 and 3 indicate that learning the tasks jointly produces a moderately better performance.
the ranking process customized to semantic dependencies is suboptimal.
neutral
train_100098
Algorithm 2 for findingȳ.
we chose the parameters for test data.
neutral
train_100099
Here, we discuss the computational complexity of the learning.
it reserves oversized margin against mistakes of the label elements during learning.
neutral