Column schema:
id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
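The records below follow a fixed four-line layout: id, sentence1, sentence2, label. A minimal sketch of how such a flat listing could be grouped back into records; the `parse_records` helper and the inline sample are illustrative, not part of any dataset tooling:

```python
def parse_records(lines):
    """Group a flat list of lines into records of the four-line
    layout used below: id, sentence1, sentence2, label."""
    records = []
    for i in range(0, len(lines), 4):
        chunk = lines[i:i + 4]
        if len(chunk) == 4:  # ignore a trailing incomplete record
            records.append({
                "id": chunk[0],
                "sentence1": chunk[1],
                "sentence2": chunk[2],
                "label": chunk[3],
            })
    return records

# Sample record taken (truncated) from the listing below.
sample = [
    "train_94900",
    "We calculated the LDs between the focus variants ...",
    "the essence is that all the characters in the alphabet ...",
    "neutral",
]
print(parse_records(sample)[0]["label"])  # neutral
```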
train_94900
We calculated the LDs between the focus variants and, respectively, their historical correct and their contemporary correct counterparts.
the essence is that all the characters in the alphabet are assigned a large numerical value which puts them apart in Euclidean space at fixed and known distances.
neutral
train_94901
In fact, since we pursue unsupervised, fully-automatic OCR post-correction, we should only be interested in the best-first ranking scores.
it is pitfalls like this that the Nederlab project (Brugman et al., 2016) is hoped to help prevent in the future, by allowing scholars to register exactly the collection of texts their research was based on.
neutral
train_94902
In this we are waylaid, faced with the more disappointing recall in our current evaluation.
it is the feature of digitized diachronic texts most often commented on by researchers working on digitized text (cf.
neutral
train_94903
predicate argument structures) and various transformations such as alternations.
in argument expression, the patterns presented in the previous section are realized as language utterances with some specific syntactic constructions proper to argument expression (section 4.2).
neutral
train_94904
Witnesses in their accounts of one and the same event will differ in a variety of ways, for example in their choice of words or the attention they give to certain details by elaborating upon them.
we limited our material to the 9 interviews labeled in the database with tape numbers 208, 230, 348, 374, 488, 490, 508, 519, 549.
neutral
train_94905
Note that '|' implies the potential segment position.
we annotated a bilingual discourse corpus in Chinese-English language pair, which contains nearly 20K sentences.
neutral
train_94906
 Structure: discourse structure links EDUs together, mainly based on the structure of a sentence.
we resort to the monolingual EDU relation to determine the BEDU relation.
neutral
train_94907
The two annotators classified 50 instances for each connective according to the PDTB 3.0 schema (27 senses) with the additional option of marking an instance as not being a connective.
we discussed different means of obtaining such semantic information, as well as our way of representing various kinds of ambiguities in the lexicon.
neutral
train_94908
In as far as the lexical semantic content of discourse connectives is taken into account in the semantic computation, SDRT relations therefore can be mapped well onto connectives.
This way, we extracted between one and 19 attested possible relations for 126 connective types that occur in the PCC.
neutral
train_94909
References to disabilities support all kinds of syntactic, morphological, and semantic variations.
it focuses on the disabilities associated with a specific RD.
neutral
train_94910
• Pendred syndrome (PDS) is a clinically variable genetic disorder characterized by bilateral sensorineural hearing loss and euthyroid goiter.
in this paper myopia, hyperopia, astigmatism, cataracts or glaucoma are not considered a disability since they are not permanent.
neutral
train_94911
In training, we compute the PMI of a word pair wi and wj, where wi and wj are in ds1 and ds2, respectively, and select the top 30,000 word pairs of higher PMI.
the HIT Chinese Discourse Relation Treebank (HIT-CDTB) is used to evaluate the proposed models.
neutral
train_94912
The most important change is the addition of speechact relations in the sense of Sweetser (1990).
other studies that have used the PDTB framework to annotate spoken data are Demirşahin and Zeyrek (2014) and Stoyanchev and Bangalore (2015).
neutral
train_94913
(5) but times have changed, even in Utah so Mr. Redford no longer stands out as an extremist (6) I've already had a meeting uhm an update meeting so the place hasn't burnt down or anything We annotated those instances as CAUSE BELIEF, following the revised version of the PDTB 3.0 (PRAGMATIC CAUSE in PDTB 2.0).
for example, in our data we found frequent uses of pragmatic discourse relations, both epistemic and speechact, thus confirming the usefulness of the revised PDTB 3.0 hierarchy.
neutral
train_94914
Looking at the categories that CCR distinguishes but the PDTB doesn't, it is revealed that the PDTB often does not distinguish between the source of coherence in its relation labels.
this dimension applies to causal, conditional and temporal relations.
neutral
train_94915
We will sum over the corpus how often these common subgraphs occur and how likely they can be mapped to each other based on the relations.
furthermore, we extend the scope of analysis and look for connected components common to both structures.
neutral
train_94916
In our proposed annotation scheme, we mark not only the speakers and listeners but also their coreference chains.
the compatibility allows, on the one hand, direct exploitation of corpora where coreference is already annotated under these guidelines; it also facilitates, on the other hand, use of our corpus as coreference data.
neutral
train_94917
For example, (i) a standard variant is used for writing communication which is well documented by dictionaries and grammars; (ii) these languages have standard orthographies and the majority of published texts adhere to these orthographic norms; (iii) large amounts of text are electronically available and can be used for developing NLP tools and resources.
in addition to results using the baseline approach, two systems were trained using Phonetisaurus: • learning only from pairs corresponding to OOVs (pairs where the historical forms and normalized forms are different) • learning from all the pairs, including pairs where the historical form and the standard one are the same. The results are similar to those we achieved in previous work on dialectal corpora.
neutral
train_94918
Mann and Yarowsky (2001) documents a method for inducing translation lexicons based on transduction models of cognate pairs via bridge languages.
most of these characteristics are not shared by historical and dialectal text resources and, therefore, standard NLP tools can often not be directly applied to such corpora.
neutral
train_94919
), various types of conversion, and a great heterogeneity as far as morphological orientation is concerned, cf.
most lexemes are connected to each other in both directions.
neutral
train_94920
In addition, due to the semantic nature of PCs they almost exclusively occur with either an indefinite or definite determiner which precedes the non-head in English (and German).
the evaluation of the method showed that the accuracy of 76% could be improved by adding a step in the PC compounder module which specified user-defined contexts sensitive to the part of speech of the non-head parts, and by using TreeTagger, in line with our approach.
neutral
train_94921
Given the differences among different DAs, this is also helpful for Arabic native speakers from different dialects.
many software tools were designed to help learners by providing dictionaries, expert feedback, carefully designed "immersive" experiences, etc.
neutral
train_94922
The lemmas for which the second rule applies are difficult to identify in a list of over 50,000 entries.
the last part consists in transferring the database from old to new German spelling.
neutral
train_94923
While the information about these special characters of a lemma can be easily extracted from the related entry in GOL, this is not always possible for the morphological analysis of the deepest morphological layer.
as it was developed in the Nineties, both encoding and spelling are outdated.
neutral
train_94924
Indeed, the score associated with "bon" (good) might differ from the score associated with "bon" when it is modified by "pas" (not).
for the phrase "il a une très bonne odeur" (it has a very good smell), "très bonne odeur" (very good smell) is a target phrase rather than "bonne odeur" (good smell).
neutral
train_94925
During annotation we achieved an interannotator agreement of 0.72 (Fleiss' κ).
for each approach, the number of applications and reviews used, as well as the app store (Apple App Store (A), Google Play Store (G) or BlackBerry World (B)) they originate from, are given.
neutral
train_94926
In exact mode the predicted text spans of aspects and subjective phrases must exactly match those of the gold standard.
many reviews of the two categories focus on the accuracy (e.g.
neutral
train_94927
The out-of-domain Subtask 3 had no participants.
the Museum domain had not been addressed in previous years or in other languages, so we had to compile a new set of guidelines for this domain.
neutral
train_94928
The primary reasons for choosing this program were a) its non-commercial license, b) its portability to a wide variety of platforms, and c) a mature set of annotation features, such as the possibility to create link attributes (used for coreference), mark overlapping elements, and assign multiple annotations of the same class to one token (which was heavily used by our experts for cases when one sentiment statement was included in another opinion, e.g., She loves this ugly jacket).
working completely independently, one of the experts has annotated 78.8 percent of the full corpus, whereas another coder has labeled the complete dataset.
neutral
train_94929
We also estimated the chance agreement p_c in the usual way as p_c = c_1 c_2 + (1 - c_1)(1 - c_2), where c_1 and c_2 are the proportions of tokens annotated with the given class in the first and second annotations respectively, i.e., c_1 = A_1/T and c_2 = A_2/T.
we are currently testing our annotation guidelines on a new group of undergraduate students to see whether the provided descriptions generalize to other coders as well.
neutral
train_94930
it overclassifies many tweets as opinionated).
a positive score of +2 and a negative score of -1 would have a combined score of +1.
neutral
train_94931
This has been rarely addressed for sentiment analysis, due to the lack of available training data and the difficulty of manual annotation.
fair evaluation of such systems is fraught with difficulty, partly because the task is often subjective (human annotators do not agree on what is correct), and partly because comparing systems fairly is complicated when they tackle the problem in different ways.
neutral
train_94932
This way, the machine learning model can also learn from tweets containing explicit expressions that do not include one of the tagged emotion words, as well as from implicit expressions of emotion.
such tweets have usually been excluded from existing gold standard corpora (Hasan, Rundensteiner, & Agu, 2014;Mohammad et al., 2014) to reduce complexity.
neutral
train_94933
With the basic emotions (happiness, sadness, fear, anger, disgust and surprise) accepted as the state-of-the-art, existing emotion corpora and other language resources that serve as the basis for building and evaluating mechanisms to detect emotion in tweets are only annotated with those basic categories.
we then collected tweet streams using these 82 @usernames as the query terms from the Twitter API.
neutral
train_94934
Note that since the multi-word phrases and single-word terms were drawn from a corpus of tweets, they include a small number of hashtag words (e.g., #wantit) and creatively spelled words (e.g., plssss).
a more natural annotation task for humans is to compare items (e.g., whether one word is more positive than the other).
neutral
train_94935
2009) and for clustering we used agglomerative clustering, implemented in CluTo .
we learn from ConceptNet (Singh et al., 2002) properties of objects, actions and phenomena which cause certain emotions.
neutral
train_94936
The precision of the obtained list over the first 1,000 entries was estimated as 78.6%.
the participants should find solutions to overcome the lexical dependence on the training collection.
neutral
train_94937
There are other studies in literature (e.g.
the idea of combining word embeddings and external knowledge to specialise them for a specific task has been actively explored in the last few years (Wang et al., 2014; Levy and Goldberg, 2014; Tang et al., 2014; Yu and Dredze, 2014; Wang et al., 2015; Pham et al., 2015; Liu et al., 2015), but all these studies extend the embeddings at word level. (footnote 1: https://code.google.com/p/word2vec/)
neutral
train_94938
In literature, there is a large bundle of studies for injecting some level of word order information into word embeddings (Qiu et al., 2014;Johnson and Zhang, 2015;Lai et al., 2015;Ling et al., 2015;Trask et al., 2015) or into general word-space models (Sahlgren et al., 2008;De Vine and Bruza, 2010;Basile et al., 2011).
see figure 3 for a schematic view of the order preserving model.
neutral
train_94939
We propose to further extend PV models to insert such kind of information and specialise them for text polarity detection.
we refer to the work of (Trask et al., 2015) that extends the CBOW model proposing to split the context into two partitions, the first containing the sum of all the context word embeddings before the target word and the other containing the sum of all the word embeddings after.
neutral
train_94940
In the experiment reported hereafter, these three properties (valence, arousal, domination) are interpreted as dimensions in a R 3 vector space.
we question this method by measuring the correlation between the sentiment proximity provided by opinion resources and the semantic similarity provided by word representations using different correlation coefficients.
neutral
train_94941
There is comparably little work with respect to visualisation in lexical semantics: Kievit-Kylar and Jones (2012) present a tool that allows one to identify relations between words in a graph structure, where words are visualised as nodes and word similarities are shown as directed edges.
optionally, the user may provide text files with the gold standard and/or automatic class assignments, relying on two comma-separated columns per line: word, class.
neutral
train_94942
 app.mikolov.models.override: set it to true to override the default word models.
for instance, labeling aligned chunks with the underlying semantic relation type and computing semantic similarity scores for them would be extremely useful for explaining or interpreting why two texts are similar or dissimilar.
neutral
train_94943
A chunk can have only one alignment and once aligned, it is not considered for further alignment.
at that time it only used gold chunks of the given text-pairs; these chunks were provided by the task organizers.
neutral
train_94944
For each annotated reading, we then automatically collected morpho-syntactic and semantic features from the LVF.
abs Features relating to abstractness or concreteness.
neutral
train_94945
The first set of features ('syn/sem' features) provides information on the morphosyntactic make-up of the verb, its argument structure and potential adjuncts, and indicates which suffixes are used in nominalisations and adjectives derived from the verb, when those are available (note that derived nouns and adjectives are coupled to a particular reading only if they preserve the meaning of the verb on this particular reading).
weak accomplishment readings are primarily telic, but nevertheless share some properties with activity readings (see Piñón (2006)).
neutral
train_94946
On one hand, these were manually assigned gold aspectual values on the telicity scale of eight values (as described in the previous section).
in this work, we focus on the way the lexical aspect of a same verb varies across its different readings, using the lexical resource "Les Verbes Français" (François et al.
neutral
train_94947
There are only 16 unambiguous verbs, while the largest numbers of readings are 37 and 19 (for 1 and 2 lemmas, respectively).
features indicative of concreteness are characteristic both of the VARiable and TELic classes.
neutral
train_94948
The conditioning of sparsity on the maximum activation area G_i^max.
two distinct models are built, namely, domain and function spaces.
neutral
train_94949
For instance, for an application involving a simple interface to a robot, quantifiers may not be needed.
we describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium.
neutral
train_94950
How to Lose Weight is a title of a document which contains 5 step titles, including Keep your own personal food diary and determine your weight loss goals upfront, Have a balanced diet, Avoid skipping meals, Eat food from home, and Learn to love fruit.
the results show (1) 20.66% of the recommended subtasks appear in wikiHow, (2) 28.98% of the steps in wikiHow appear in our recommendation set, and (3) 47.04% of the recommended subtasks not appearing in wikiHow are labelled as correct by human assessors.
neutral
train_94951
Indeed, semantic vectors built for a single language were able to predict in an unsupervised setting typological patterns that hold across languages.
all models produced statistically significant correlation with typological data (p < 0.01).
neutral
train_94952
The list of contexts includes nouns presupposing both direct and figurative usages of the adjective in question (e.g.
it has been argued that patterns like 'sharp'→'spicy' are not arbitrary but are motivated by an inherent semantic basis, which is common for different languages (Rakhilina and Reznikova, 2013;Koptjevskaja-Tamm et al., 2015).
neutral
train_94953
Consider the following example: RT @NewAppleDevice: Apple's smartwatch can be a games platform and here's why http://t.co/uIMGDyw08I It contains factual information that can be understood even without visiting the link.
given the 4000 tweets on the four topics mentioned above, Table 1 reports on the number of tweets annotated as arguments.
neutral
train_94954
Finally, some conclusions are drawn.
if tweets contain pronouns only (preventing the understanding of the text), we consider such tweets as not "self-contained", and thus non-arguments.
neutral
train_94955
Extended syntactic and semantic information was added to the lexicon, in order to formalize and process them.
the module can be downloaded from the NooJ website 2 .
neutral
train_94956
We believe that this is due to the composition of the feature set we use.
this does not reflect the reality in forensic scenarios: in practical linguistic forensic investigations, the resources that are available to profile the author of a text are usually scarce.
neutral
train_94957
As table 6 is sorted in order of decreasing positive training instances available for a research topic, we observe that topics with more positive training instances tend to be more lossy.
we investigated the task of classifying funded grants descriptions into research topics.
neutral
train_94958
Automatic assignments are useful for decision making only if they are very precise.
initiative, holding institution and department (table 1), were sentence split and tokenised.
neutral
train_94959
We employ separate views for the free text fields, the award details and the principal investigator's name.
as shown in table 8 we computed average scores for research topics for which the 5 agreements method achieved an F-score higher than 75%, i.e.
neutral
train_94960
Collaboration among editors with different skills is essential to developing high quality articles (Kittur and Kraut, 2008).
in detail, we used two of the multi-label classifiers implemented in Mulan (Tsoumakas et al., 2011) with ten-fold cross-validation.
neutral
train_94961
There are three main future particles in YEMS: š, ς, y.
• POS5 is a reduced tag set of five tags based on traditional Arabic grammar.
neutral
train_94962
Moreover, while the word grammar in Humor describes productive Hungarian compound constructions quite accurately, the description of compounds in the morphdb.hu grammar is extremely simple: it allows an arbitrary number and type of nominal stems (nouns, adjectives or numerals) to occur in a compound in an arbitrary order, accepting for example the non-existing tevepiroshatvannegyven ('camelredsixtyforty') as a correct compound; as a result, this tool returned many more nonsense analyses with the productive compounding function on than the other two analyzers did.
jmorph is implemented in Java.
neutral
train_94963
These annotations are then used to compute the gold standard using different methods, and a series of regression experiments is conducted to evaluate their impact on the performance of a regression model predicting the degree of naturalness of L2 speech.
additionally, to add some more specialised (but still text-independent) features, we also compute duration, rhythm, and prosodic features derived from a segmentation of the recordings into vocalic and consonantal intervals, and from pseudo-syllables derived from that segmentation.
neutral
train_94964
When we employ the default cross-validation procedures of toolboxes such as WEKA or sklearn, we obtain (with 10 folds and late fusion of all six feature groups) ρ=0.715.
these features were computed using the segmentation of pauses, vowels, consonants, and speaker noise derived from the PR.
neutral
train_94965
In French, fricatives and plosives at the end of words can also occur as voiced consonants -in contrast to German where final devoicing applies.
it provides spoken data by many speakers with comparable productions, annotated and segmented on the word and the phone level, with a substantial amount of hand-correction.
neutral
train_94966
Of course, in this case, general frequency effects and the diversity of situations of the data collection (which always corresponded to naturalistic settings) constrain the results.
for instance, "colar" is ambiguous given that it may correspond to the common noun 'necklace' (in which case the word should be tagged as "CN|colar") or to the infinitive form of the verb 'to glue' (in which case it should be tagged as "V|colar").
neutral
train_94967
In the worst cases, which is the case for the majority of the 6000+ languages spoken in the world, no computational lexicons are available 1 .
since the multilingual Wikipedia project encourages the production of encyclopedic-like articles in many world languages, we can find there an ever-growing source of text from which to extract these two language modelling elements: word list and frequency.
neutral
train_94968
Place names are often given with their original language names; national anthems can be listed with their original language lyrics, as well as translations; book, song and 'Anarchism' is a political philosophy that advocates stateless societies, often defined as self-governed, voluntary institutions, but that several authors have defined as more specific institutions based on non-hierarchical free associations.
a useful first step for under-resourced languages would be to provide at least a list of known words in that language.
neutral
train_94969
In recent years, several datasets have been released that include images and text, giving impulse to new methods that combine natural language processing and computer vision.
the data are stored as one JSON-LD file per news website.
neutral
train_94970
The corpus consists not only of words but also compounds and phrases.
these corpora often contain few users, but a higher number of repetitions per user to improve recognition performance (Cooper et al., 2011).
neutral
train_94971
To record realistic data (but not necessarily played by elder person), people under 60 were recruited.
CIRDO may operate a transfer of the risk recognition: it is the technology that gives official status to a fall in a domestic incident.
neutral
train_94972
There are a number of limiting factors, among them standardization and difficulties in manual annotation.
among them, 637 types have been recognized as conventional lexical units and have become part of the lexical database.
neutral
train_94973
For organisational reasons, five teachers explained the task to a 'knowing' learner, who was already acquainted with the task but instructed to act as if he/she did not know the task.
both handles are marked with colours.
neutral
train_94974
The learner is only observing while the teacher is explaining and conducting the task.
further tiers with information derived from recording force (Task 2) and motion data (Tasks 1-4) will be annotated.
neutral
train_94975
Also, each multi-media document (e.g.
a cluster which contains segments with different labels is divided.
neutral
train_94976
We also claim that it is feasible to learn a certain set of rules, e.g.
variations within the "complete" segments can be considered as naturally occurring variants that need to be captured by a learning model.
neutral
train_94977
After a short introduction the investigator hands over a list of tasks to the participant and leaves the room.
by adding logical or semantic information to our database learning could be enhanced.
neutral
train_94978
(2012) talk about sign language corpora of videos, and present a project that aims to translate visual sign language to written spoken language, with a written sign language middle step.
as it is common in sign languages, some words in spoken languages have no sign.
neutral
train_94979
The IMAGACT data accounts for the semantic competence, separating the contexts from the metaphorical and the idiomatic expressions.
the animated videos illustrated in Figure 2 on the IMAGACT platform exhibit some ambiguity with reference to different types of actions.
neutral
train_94980
We use DBpedia as a supervisory tool to guide the clustering process, so that the structure of learned ontology is more similar to Wikipedia concept hierarchy.
we cluster different kind of relations between terms into groups.
neutral
train_94981
Ultimately, it is the need of the users that decides which degree of coverage is acceptable.
it should also be offered to teachers and authors of text books for higher education.
neutral
train_94982
We therefore added a Difference measure, which shows the difference between the coverage for each list in the two corpora.
the choice between them was decided by testing coverage (see Section 5).
neutral
train_94983
This means that a lot of academic words have been lost compared with the G&D-2.6_0.3_0.6_3.2 list, which has 6.74 in KIAP.
this turned out not to be enough.
neutral
train_94984
In the case of the example sentence 'Ford hires John as their new CEO', the instantiation of the defined situations for the event instance of eso:JoiningAnOrganization will then look as follows: :eventX_pre { :John eso:notEmployedAt :Ford } :eventX_post { :John eso:employedAt :Ford . :John eso:hasFunction :new_CEO . :John eso:isEmployed 'true' } Instantiation of events that express a change in a scalar value: by default, situation assertions will only fire if some instance for an ESO role is found by the SRL module.
in total, 268 events were found with inferred pre and post situations and 47 events with inferred during situations.
neutral
train_94985
Frames that denote finegrained semantic distinctions are often grouped into one class in ESO since these distinctions do not influence the modeling of a salient set of pre and post situations.
lexical structures do not make explicit how the meaning of a verb needs to be combined with other event components, such as the participants and the temporal properties for the purpose of semantic parsing.
neutral
train_94986
In order to guarantee a fair comparison with UT-based approaches, we only consider triples assigning word classes (subclasses of olia:MorphosyntacticCategory, roughly corresponding to UD POS tags), but ignore triples with olia:MorphosyntacticFeature assignments (roughly corresponding to morphosyntatic features in UD).
our evaluation is biased towards features from the German annotation as we use the German-based MLG gold annotations for bootstrapping the annotation scheme.
neutral
train_94987
For w i , we can expect that this will be true for more than one category.
for each of them we find three columns: "Ok" for the number of correct cases, "Total" for the total number of cases and "P" for the precision as the ratio between both values.
neutral
train_94988
We used the text of Spanish Wikipedia as a corpus, not because we have a particular interest on it, but because it is large and freely available.
as a result of the application of this first step, the CPA ontology, which initially contained a set of 250 semantic types, has grown to a total of 2,290 categories, each one representing a parent node populated with hyponyms.
neutral
train_94989
The POS tagging of the GOLD standard is relevant in two ways for this project: on one hand, we expect POS tags to provide useful information for the (manual or partly automatic) segmentation process.
ultimately, segmentation with 0.3-second pauses as segment boundaries delivered the best results for learners' speech.
neutral
train_94990
All these phenomena are marked with dummy placeholders in the normalization process and are therefore easy to filter out in the training process and easily tagged in a post-processing.
we expect the performance of POS taggers to improve once the data has been segmented into units comparable to the sentences of written language (see above).
neutral
train_94991
AMIRA uses punctuation as an indicator for a new token so replies, mentions, retweets and hashtags in tweets are broken into the indicator part (@ for replies, mentions and retweets and # for hashtags) and the remainder of them.
there is a need for a POS tagger which should take into consideration the characteristics of Arabic tweets and yield acceptable results.
neutral
train_94992
These findings hint at the need of further developing taggers possibly by extending the feature space (e.g., by morphological, syntactic or even semantic features).
extracting lemmas and syntactic words (including all grammatical categories available) from a Wiktionary instance in a thorough and robust way is not a trivial task.
neutral
train_94993
We presented TLT-CRF as a hybrid morphological tagger for Latin which employs lexicon-based features as well as hand-crafted rules in the framework of 1st order CRFs.
beyond Part of Speech (PoS), TLT-CRF tags eight inflectional categories of verbs, adjectives or nouns.
neutral
train_94994
16.7 tokens per lexicon entry.
it uses the lexicon-as-features rather than the lexicon-as-constraint approach.
neutral
train_94995
it uses the lexicon-as-features rather than the lexicon-as-constraint approach.
during the split of the corpus we did not shuffle sentences, but documents, thereby obtaining a more realistic split of the data.
neutral
train_94996
We test our tagger on Slovene data, obtaining a 25% error reduction of the best previous results both on known and unknown words.
in contrast to most other taggers of highly inflected languages, tagging of unknown words is directly incorporated into the tagger architecture, rather than being handled by an additional process.
neutral
train_94997
For example, we can understand the word mới in accordance with two meanings shown in rows 1 and 2 of Table 2.
the quantifier indicating the whole will be annotated with a Pp tag if it is head word of a noun phrase.
neutral
train_94998
The example is given in row 3 of Table 1.
we then discuss difficulties in WS, POS tagging, and bracketing, and how we solve them in developing the annotation guideline in Sections 3, 4, and 5 respectively.
neutral
train_94999
For example, khó_khăn can be understood as difficulty or difficult.
(2009) classified the words on the basis of their combination ability and syntactic function.
neutral