Columns:
id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
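The records below follow a fixed flat layout: four consecutive lines per example (id, sentence1, sentence2, label). A minimal sketch of how such a dump could be grouped back into records, assuming exactly four lines per example; the class name, helper name, and sample text are illustrative, only the field names come from the schema above.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class SentencePair:
    id: str
    sentence1: str
    sentence2: str
    label: str

def parse_records(lines):
    """Group a flat list of lines into SentencePair records, four lines each."""
    usable = len(lines) - len(lines) % 4  # drop a trailing partial record, if any
    return [SentencePair(*lines[i:i + 4]) for i in range(0, usable, 4)]

# Tiny illustrative dump in the same flat layout as this file.
flat = [
    "train_100300",
    "Our training data was extracted from the RSS corpus.",
    "this reduced our model size about 5 times.",
    "neutral",
]
records = parse_records(flat)
label_counts = Counter(r.label for r in records)
```

With the real file, `flat` would be the stripped non-empty lines after the schema header.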
train_100300
Our training data was extracted from the Romanian Speech Synthesis Corpus (RSS) (Stan et al., 2011) and it is comprised of a small number of words (8K).
this reduced our model size about 5 times.
neutral
train_100301
Another issue was the slight discrepancy between the set of labels from the manual annotation and the models.
finally, for evaluation purposes, a gold standard "corpus" was required, in which dependency relationships containing a polarized word and a contextual valence shifter have been annotated.
neutral
train_100302
AP(<ADJ:loin de>,<ADV:>)) and often impact a phrase or a whole sentence, not directly a lexical item.
since there was no such corpus available, we randomly selected 500 sentences from the whole NOMAO corpus (http://fr.nomao.com/) and discarded them from this corpus, which was therefore considered as the training corpus.
neutral
train_100303
Limited to noun phrases, (Campbell et al., 1999) applies pattern-based rules and combines them with UMLS concepts to acquire new and semantically classified terminology.
semantic classifications such as pathological information are particularly missing so far.
neutral
train_100304
If sentences are misclassified as 'pathological' (although they describe non-pathological findings), this is a minor issue.
for learning, we adapt the standard probabilistic CKY parsing algorithm.
neutral
train_100305
It should be noted that using previous and following morphs in English is not very beneficial due to the simple morphology of the language.
furthermore, morphs that are attached to an unvoiced consonant ending word are also harmonised and the first morph letters become also an unvoiced consonant (i.e.
neutral
train_100306
Furthermore, the semantic class that a preposition can take is language specific.
the improvement given by multilingual evidence is rather small.
neutral
train_100307
For example, the article for the entity Turing that refers to the "English computer scientist" has the unique identifier ALAN TURING, whereas the article on Turing with the "stream cipher" meaning has the unique identifier TURING (CIPHER).
the pairwise Pearson correlation between the two annotators was measured at 0.77 and 0.83 for English and Spanish respectively, which represents a high agreement.
neutral
train_100308
The corpus includes misspellings from young children as well as extremely poor spellers subject to spelling tests way beyond their ability.
he defines the problem as 'a single word w and a large set of words W , quickly deciding which of the words in W most closely resembles w measured by some metric of similarity, such as minimum edit distance' and points out that finding the closest match between w and a large list of words, is an extremely demanding task.
neutral
train_100309
from native language identification (see among the others Koppel et al.
for what concerns morpho-syntactic features such as Part-of-Speech distribution, literary texts show a higher occurrence of pronouns and verbs, two features which are more common in conversation than in written language varieties (Biber and Conrad, 2009).
neutral
train_100310
For what concerns morpho-syntactic features such as Part-of-Speech distribution, literary texts show a higher occurrence of pronouns and verbs, two features which are more common in conversation than in written language varieties (Biber and Conrad, 2009).
achieved results are evaluated in terms of overall accuracy, Precision, Recall and F-measure.
neutral
train_100311
Let λ̂_β be the maximizer of the expected F_β measure F̂_β.
we used 400 of the samples for training and 200 for testing.
neutral
train_100312
The averaged perceptron approach takes, instead of the final value of the parameter vector θ, its average θ̄ over all the iterations.
closest to our application is the application of mined morphological paradigms in (Durrett and DeNero, 2013), the morphological unit segmentation in (Chang and Chang, 2012) and the Finnish morphological generation for machine translation in (Clifton and Sarkar, 2010).
neutral
train_100313
An example is that of the verb a putea (to be able to), whose stem vowel u transforms into o and oa, forming a singleton alternation pattern.
the validation scores are computed using tenfold cross-validation over the training set, and the best hyperparameters, in terms of word-level accuracy, for each learning method, are presented in table 4.
neutral
train_100314
This result suggests that using a reduced tag set yields a more informative performance estimation for some specific task because irrelevant tagging errors are not taken into account, but it does not necessarily lead to increased performance. Table 4: Cross-genre F1 performance of the tagging models; the main diagonal represents performance using 10-fold cross-validation for the relevant original tags.
when tackling a specific syntactic phenomenon, only a subset of signs and tags may be involved.
neutral
train_100315
Excellent performance (F1 > 95%) is noted for the complementiser that and wh-signs such as who, when or which.
signs bounding subordinate constituents are often not matched pairs.
neutral
train_100316
Acronyms are formed to speed up and ease communication, mainly to create words for concepts frequently used or difficult to describe.
the third column summarises how many news articles need to be analysed to find a new (i.e.
neutral
train_100317
The last column provides the ratio between the number of LFs for the same SF, considering all SFs.
not much work exists for languages other than English.
neutral
train_100318
In this method, all the nodes of the path (the path from event nodes to their common parent) and the ones among this path would be designated as the desired portion of tree.
the task of temporal relation classification in Persian is more complicated than in other languages such as English.
neutral
train_100319
In original SSK, an alphabet letter is assumed as a comparing unit that can be expanded to sub-string by increasing the K value.
high G-ExcEva of the signal class is evidence of this improvement.
neutral
train_100320
SSK Kernel: SSK was initially proposed for estimating a similarity measure between sequences (Lodhi et.
a simple mapping method that relates a node label to an individual ASCII character can be used for the SSK adaptation.
neutral
train_100321
We argue that applying a modelling approach originally designed for lexicons to a wider range of language phenomena brings a new perspective to the relationship between theory-based and empirically-based approaches to language processing.
this fragment defines a node called DOG with two path equations, specifying that the (syntactic) category is noun, and the (morphological) form is dog.
neutral
train_100322
It defines <pos> to be the value "<table "<prev pos>" "<prev prev pos>" >".
this definition presupposes a feature <vstart> which returns true for words that start with a vowel, false otherwise.
neutral
train_100323
The user was encouraged to reflect on why the system might have misidentified his introduction.
the primary concern of our system is to promote self-regulated learning, self-knowledge, and metacognition.
neutral
train_100324
Each phrase generalization is based on up to 12 string comparisons, taking an average size of phrase as 5 words.
hence we define the operation of generalization on a pair of PT as finding the maximal common sub-thickets based on generalizing phrases from two paragraphs of text.
neutral
train_100325
Taking into account a separate generalization of noun and verb phrases, this average case consists of 2* 45*45 generalizations, followed by the subsumption checks.
of generalizations of two RST relations of the same sort (evidence) we obtain Iran nuclear NNP -RST-evidencefabricate by USA.
neutral
train_100326
As a candidate generator we use a variant of the algorithm presented in (Gerdjikov et al., 2013).
we can use a gradient method to find a local extremum of F .
neutral
train_100327
Polanyi and Zaenen (2006) investigate the usage of contextual valence shifters and discourse connectives inside a text.
there are some open issues related to this research area, commonly known as Opinion Mining, which can be summarized as follows: (1) Opinions are potentially ambiguous, and (2) Contextual interpretation of polarity is hard to achieve.
neutral
train_100328
For example, there are plenty of instances where beardedness or wearing glasses are distinguishing attributes by themselves, but this is not the case for most other attributes (e.g., wearing a tie).
as a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent.
neutral
train_100329
Thus, we assume that people not only choose attributes on the basis of some intrinsic properties, such as salience, but also on the basis of their contribution to the identification of the intended referent.
we first discuss previous work, then we motivate our approach.
neutral
train_100330
Choices in this procedure can be made more specific by the corpus frequencies, thus incorporating some element of the majority of approaches to the GRE challenge (such as (Bohnet 2008) and (Kelleher, McNamee 2008)).
as a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent.
neutral
train_100331
The multilingual lexicon was then compared against the existing WordNets: PWN (Fellbaum, 1998) for English; BalkaNet (Tufis, 2000) for Czech, Romanian and Bulgarian.
further submeanings were also defined but for the purposes of this exercise we assumed them to be part of the main meanings.
neutral
train_100332
We used the Most Frequent Sense (MFS) as the first baseline for this comparison.
on occasions, a combination of translations would correspond to more than one sense of the word.
neutral
train_100333
We therefore use an existing multilingual lexical resource (Lefever, 2009) to develop a large, artificial parallel corpus containing semantically disambiguated polysemous words, and use it to calculate the maximum contribution that parallel corpora can make towards WPSD under ideal conditions, when all other processing steps are 100% accurate and therefore do not introduce any noise to the process.
the quality of proto-synsets is only as good as the quality of word alignment, not worse.
neutral
train_100334
For the best combination of kernels, they obtained F-measure equal to 77.62%.
we start from the first AgP_0 head to the left, then we proceed to the right, jumping from AgP_i head to the closest AgP_i+1 head to the right.
neutral
train_100335
Each of the written operators refers to one semantic relation.
one of them is presented in Listing 1.
neutral
train_100336
This type of analysis, outputting stems and appended morphemes aims to identify some kind of border between the different morphemes.
there is no need for length adjustment to these ratios since we are considering only three letter roots.
neutral
train_100337
Given this degree of error in tagging, a parser trained with similar noise in POS tags may outperform one which is trained on gold tags.
16 000 sentences from weblogs, newsgroups, emails, reviews, and question-answers.
neutral
train_100338
Next, we compared the effect of training and testing within sub-genres and saw that features such as sentence length have a strong effect.
note that the difference in the number of words in the training set across these three methods is minimal: they vary only by 41 words.
neutral
train_100339
The raw sentence-internal co-occurrence counts from the original matrix have been transformed into Local Mutual Information scores (Baroni and Zamparelli, 2010;Evert, 2005).
Baroni and Zamparelli (2010) suggest treating adjectives as distributional functions that map between semantic vectors representing nouns to ones representing AN combinations.
neutral
train_100340
As an example, "Peter drives a red" can be analyzed as Subj Det Adj Obja using structural prediction.
since these transformations all work independently, they have been parallelized in the same manner as the initialization.
neutral
train_100341
Each time a new word w is pushed to jwcdg, MaltPredictor forwards w together with its PoS-tag onto MaltParser's input queue and runs MaltParser's algorithm until a shift operation occurs.
since the PoS tags are needed and a PoS tagger needs lookahead as well to achieve good accuracy, a lookahead of at least three words is needed for the whole tagger-parser pipeline to achieve a high accuracy.
neutral
train_100342
From semantic representations to surface phonologic representations, seven different levels of linguistic representation are supposed for each set of synonymous utterances.
both of them are based on directed labelled graph structures, and some research has been done towards using them to represent dependency structures and knowledge of the ECD (OWL in (Lefrançois and Gandon, 2011; Boguslavsky, 2011), CGs at the conceptual level in (Bohnet and Wanner, 2010)).
neutral
train_100343
The set of unit node markers is denoted M∩ and is the powerset of M. If a unit node is marked by ∅, it is said to be generic, and the represented unit is unknown.
for a specific Lexical Unit L, (Mel'čuk, 2004, p.5) distinguishes considering L in language (i.e., in the lexicon), or in speech (i.e., in an utterance).
neutral
train_100344
We use a slot filling system that has achieved highly competitive results (ranked top 2) at the KBP2012 evaluation as our baseline.
the effectiveness of this model has been demonstrated in the challenging Knowledge Base Population Slot Filling task, where a weighted voting system achieves 2.3% absolute improvement in F-measure score based on the confidence estimates.
neutral
train_100345
Two annotators are trained to perform the annotation.
when parsing the context sentence, we replace each citation content with a <CITATION> symbol, in order to remove the contextual bias.
neutral
train_100346
So, the best result (97.90 BLEU and 0.0114 TER) could be considered as an upper bound for the proposed approach.
similar approaches are used by Oflazer (2008) for English to Turkish SMT, Luong et al.
neutral
train_100347
Our proposed approach predicts each morphological feature independently.
morphological analysis is an important phase in the translation from or into such languages, because it reduces the sparseness of model.
neutral
train_100348
it is positive in the food domain while it is negative in the negative domain), contributing to the need for domain adaptation.
in the all-in-one classifier experiments, the sentiment classifier is trained with 26 domain datasets while testing it with the 27th domain.
neutral
train_100349
(2007), the all-in-one exceeds the ensemble only in the DVD domain.
creating three versions of each feature in both the source and the target domains grows the feature space exponentially, which is prohibitive in a many-domain adaptation scenario such as ours which consists of a total of 27 domains.
neutral
train_100350
We focus on the manual rule creation approach and discuss to what extent the existing tools can be used for this task.
we present a language independent formalism for rule creation and execution called wCCL Relation.
neutral
train_100351
As long as there is a continuous sequence of EX-PAND relations linking the subsequent sentences, citations in those sentences can fill the ARG2 slot for St SLink.
some example instances of these CONTRAST/CORROBORATE relations are provided in Figure 1; the document containing the citations (represented as we or this study) contrasts with [ We use square brackets and numbers (IEEE style) to represent citations that are the object of this study (Figure 1).
neutral
train_100352
As long as there is a continuous sequence of EX-PAND relations linking the subsequent sentences, citations in those sentences can fill the ARG2 slot for St SLink.
the non-initial use of roughly still means something like approximately, but the connection with the previous sentence is no longer there, e.g., The odds were roughly 8 to 1.
neutral
train_100353
However, they indeed play an important role in distinguishing some classes of verbs.
while "divide" and "open", which belong to unaccusative class, are clustered together into the object-drop class.
neutral
train_100354
In other words, considering the whole document as the context window is more helpful than considering just the whole sentence for reranking the candidate.
in order to detect and correct this kind of error, context analysis of the text is crucial.
neutral
train_100355
Let us assume that w_i is in the j-th phrase of E; then, we can estimate | as follows: Equation 4 is the same as the phrasal translation model in phrasal SMT systems.
some statistical methods use Google Web 1T N-gram data set to detect and select the best correct word for a real-word error (Bassil & Alwani, 2012;Islam & Inkpen, 2009).
neutral
train_100356
We propose and evaluate several strategies for making use of these multiple inputs: (a) select one of the datasets, (b) select the best input for each sentence, and (c) synthesize an input for each sentence by fusing the available inputs.
this is a very poor strategy here; however, below we will see that it is quite reliable if we make a choice based on length ratio.
neutral
train_100357
The Stanford CoreNLP tool kit is used for tokenization, POS tagging (Toutanova et.
this paper proposed the automatic generation of Multiple Choice Questions (MCQs).
neutral
train_100358
The previous two stages (sentence selection and keyword selection) are not domain specific in nature i.e.
automatic CQG has received a lot of research attention recently.
neutral
train_100359
The existence of translationese has been discussed and more recently various methods (Koppel et al., 2011;Ilisei et al., 2010) for identifying translationese have been devised.
мимо, домой, против, напротив, около, далеко, видно, вокруг, гораздо, вон, весело Observation: Sometimes the counter of the "best" result and the counter of the "second best" result could have similar values, but this was not the case for us.
neutral
train_100360
Then for each n/2 < i ≤ n we create a set F i by adding consecutively one more word found at position i in the entire set F .
in order to inspect our results we used a hierarchical clustering method and drew conclusions based on the most frequent result.
neutral
train_100361
Implementing just a few rules for fixing these, we achieved 97.84% token accuracy and 86.77% clause accuracy on the development set.
while a 10-million-word portion of their English newspaper corpus has less than 100,000 different word forms, a corpus of the same size for Finnish contains well over 800,000.
neutral
train_100362
Moreover, there are much more different possible morphosyntactic tags in the case of agglutinating languages (corresponding to the different possible inflected forms) than in English (several thousand vs. a few dozen).
one such case from Hungarian is briefly discussed in (Orosz and Novák, 2012).
neutral
train_100363
We begin the analysis of the results by reporting the effectiveness of the subjectivity unigram classifiers in Table 1.
having built the unigram-based subjectivity and polarity classifiers in the first stage of the process, the sentence of each training document is classified in terms of its subjectivity and polarity.
neutral
train_100364
This decision contrasts with the way that sentence-based sentiment analysis is utilized by Mao and Lebanon (2006) and the experiments presented in section 5 indicate that it typically results in increased accuracy.
at training time, it requires labeled data at all levels of analysis, which is a significant practical drawback.
neutral
train_100365
A triple x, y, w from a KS K_i suggests linking x to synsets including y.
the evaluation is based on wordnet reconstruction task proposed in : randomly selected words are removed from a wordnet and next the expansion algorithm is applied to reattach them.
neutral
train_100366
There is no perfect recipe that allows identifying all good and sufficient mappings from all bad and incomplete mappings in a language independent fashion, however, the mapper allows users to decide whether non-mapped segments at the beginning or the end of terms should be allowed or prohibited.
given two lists of terms (in two different languages) the task of the term mapping system is to identify which terms from the source language contain translation equivalents in the target language.
neutral
train_100367
Previous research suggests several possible vector representations for documents.
this might not be the case for cross-domain graphs.
neutral
train_100368
The baseline stands for the performance of a linear-kernel SVM classifier trained on the source data.
in the field of sentiment analysis GB models have been employed for sentiment classification (Pang and Lee, 2004;Goldberg and Zhu, 2006;Wu et al., 2009), automatic building of sentiment lexicons (Hassan and Radev, 2010;Xu et al., 2010), cross-lingual sentiment analysis (Scheible et al., 2010) and social media analysis (Speriosu et al., 2011).
neutral
train_100369
They proposed to reorder VS construction into SV order for SMT word alignment only.
table 3 compares the results, in terms of BLEU scores, of the three experimental settings in 3 evaluation schemes, as follows: (a) Standard, which includes performing recasing and removing white space before punctuation, (b) Nopunct, in which punctuation is stripped and evaluation is performed on the lexical text only, and (c) Nopunctcase in which, in addition to removing punctuation, all words are lower-cased.
neutral
train_100370
On the one hand, developing an Arabic-French machine translation system is not an easy task, although there is a vast amount of training data nowadays.
in this paper, we describe an ongoing effort to build our first Arabic-French phrase-based machine translation system using the Moses decoder among other linguistic tools.
neutral
train_100371
This grammatically complete sentence carries a conjunction w, a future particle s, a verbal token yktbwn, and a feminine singular third person object pronoun hA.
dealing with the complexity and ambiguity of the source language plays a major role in boosting the efficiency of the translation system.
neutral
train_100372
These adhoc models differ significantly from the models of how translation happens that are used during actual translation, which violates the basic machine learning assumption that the same model should be used during training and testing.
the LITG is then transformed into a full ITG whose parameters are again tuned to the training corpus.
neutral
train_100373
Theoretically, proper names do not occur in a hypernym relation, since whether or not they would be considered correct is highly dependent on the context of the document: the hypernym pair (priest, John) can be correct in a text where John is in fact a priest, but if another non-priest John is referred to this pair would be incorrect.
correct hypernym-hyponym pairs not occurring in the same clusters are erroneously eliminated by the filtering module.
neutral
train_100374
To measure relatedness, most of the lexicon-based approaches rely on the structure of the lexicon, such as the semantic link path, depth (Leacock and Chodorow, 1998;Wu and Palmer, 1994), direction (Hirst and St-Onge, 1998), or type (Tsatsaronis et al., 2010).
here again, the results are reported in terms of accuracy.
neutral
train_100375
The search algorithm will be executed as many times as necessary, so that all positive examples are covered, i.e., until the set of positive examples is empty.
hence we are convinced that it is admissible to investigate the possibility of generalizing this information using a classification algorithm.
neutral
train_100376
<myocardial infarction, heart attack>, <emesis, vomiting> etc.
"your doctor would call it") have the tendency to remain isolated.
neutral
train_100377
This study is certainly more accurate with respect to previous similar works that use few manually defined keywords (Althouse et al., 2011), such as flu and influenza.
2 "For the past 3 days I have had a stuffy, runny nose, congested chest, fever, sore ears and throat and burning eyes.
neutral
train_100378
Systematic keyword analysis has shown that being able to trace both technical and naïve terminology produces a much larger body of evidence.
first, the algorithm depends on the availability of critical resources: web query logs are a kind of data which is not freely available.
neutral
train_100379
without any weight).
we also provide in this article two distinct ways to combine them according to different purposes.
neutral
train_100380
Unlike , our CONJ chunk only contains the conjunction token(s) and, as opposed to Paroubek et al.
every discrete HMM can be "transformed" into a CRF model defining exactly the same probability distribution (Sutton and McCallum, 2006;Tellier and Tommasi, 2011).
neutral
train_100381
Future linguistic analysis will clarify the unexpected behavior of A61K.
definitions of "sublanguage" vary, but have some commonalities.
neutral
train_100382
This could include indexing by semantic classes of named entities relevant to the domain of the patent, relationships between semantic classes of named entities, and the like.
genre and domain differences were measured by calculating the average sentence length, the type-token ratio and the hapax ratio.
neutral
train_100383
The BNRC has no tendency towards closure at all.
the experiments require two bodies of data: the collection of data that is being examined for fit to the sublanguage model, and a "background" corpus consisting of material in the general (i.e.
neutral
train_100384
Computing BLEU scores over character sequences does not make much sense, especially for small n-gram sizes (usually, n ≤ 4).
pivot languages can also be used for paraphrasing and lexical adaptation (Bannard and Callison-Burch, 2005;Crego et al., 2010).
neutral
train_100385
One possibility is to just apply the models tuned for the individual translation tasks, which is suboptimal.
in future work, we plan to try the more sophisticated bitext combinations from (Nakov and Ng, 2009).
neutral
train_100386
We provided a qualitative analysis of some of the highest ranking conjunctions, and they appear to be strong predictors for Location that a domain expert would also consider adding to the model.
a larger l_opt would mean that the model benefits from conjunctions between two low scoring atomic features, which is not the case, according to our results.
neutral
train_100387
1 The corpus comprises 1,808 summaries, produced by 452 summarisers, for four different dialogues (each summariser produced a summary for each dialogue).
abundant, such sources have the drawback of being quite generic, making it harder for the researcher to control different phenomena.
neutral
train_100388
Comparing different translation models containing MT useful data ranging from comparable, through strongly comparable, to parallel, we concluded that there is sufficient empirical evidence not to dismiss sentence pairs that are not fully parallel on the suspicion that because of the inherent noise they might be detrimental to the translation quality.
2010, addressed large-scale parallel sentence mining from Wikipedia.
neutral
train_100389
Many documents in one language are shortened or adapted translations 1 of documents from other (not always the same) languages and this property of Wikipedia together with its size makes it the ideal candidate of a strongly comparable corpus from which parallel sentences can be mined.
depending on the comparability level of the extraction corpus, the quantity of parallel data extracted may range from 0.1% (weakly comparable corpora) to 29% (strongly comparable corpora) of the entire corpus (Ion et al., 2011).
neutral
train_100390
In the following figure, we detail the three stages of the proposed approach.
this approach takes advantage of the LMF fine structure that highlights all kinds of relationships between entries' knowledge and distinguishes the role of each available text such as giving definitions and examples.
neutral
train_100391
Indeed, the example" ‫اَم‬ ‫َم‬ ِ ‫ا‬ ُ ‫َم‬ ‫ا‬ ‫َم‬ ‫او‬ ‫َم‬ ‫َم‬ ‫أ‬ -the boy takes the book" given in figure 6 has a syntactic behavior "verb subject object (VSO)".
it removes the longest suffix and prefix.
neutral
train_100392
Indeed, LMF highlights all kinds of relationships between entries' knowledge and distinguishes the role of each available text such as giving definitions and examples.
two kinds of anomalies can occur.
neutral
train_100393
Recently, data selection as a simple and effective way for this special task has attracted attention.
it needs a large size of the selected subset (more than 50% of general corpus) to obtain an ideal performance.
neutral
train_100394
The classification method has been run on a subset of sentences of Wall Street Journal (WSJ) annotated with the elementary tree of the XTAG grammar.
the annotation process has been done according to the treebank creation method introduced in Basirat and Faili (2013).
neutral
train_100395
The difficulties raised in the manual creation of treebanks led the researchers to use automatic and semi-automatic treebank development methods.
the combination of Adaboost and Random Forest has been used for traffic flow (Leshem and Ritov, 2007) and cancer survivability (Thongkam et al., 2008) and improved their performance.
neutral
train_100396
In any case, the lexical network that is built is not free of errors, which are corrected as they are discovered.
this confidence check is performed by comparing the relation weight to a confidence threshold, which is computed as the starting point of the long tail in the distribution of the relation.
neutral
train_100397
The concept of a mention is closely related to NPs in syntax.
such complex cases cannot be covered by a simple heuristic, but rather need to be defined via language dependent rules in order to be captured properly across languages.
neutral
train_100398
On average every third sentence contains at least one co-reference chain.
in this way, all information about the potential tags and the correct tag is represented in the form of a subtree, attached to the wordform.
neutral
train_100399
However, the tool is not freely available and is not open source.
as for Hungarian POS-tagging, hunpos was developed on the basis of hunmorph (Halácsy et al., 2006).
neutral