Dataset schema:
id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
train_93200
We introduce an effective batch updating algorithm, and present performance measurements on a subset of the Reuters newswire corpus that show that a 12.5% to 50% split of the documents into corpus and update vectors leads to a three to four fold speedup over a complete rebuild.
for deletions, we let an index permutation remove demoted terms, with e_i denoting the i-th standard basis vector. We associate every update u = (i, v) ∈ U with two term frequency vectors.
neutral
train_93201
The models are compared with a baseline system and two versions of the PBA wherever relevant.
all possible add-delete rules are applied on a given word form, and the resulting lemma is checked against the vocabulary to determine whether it is valid.
neutral
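The record above describes rule-based lemmatization: every candidate add-delete rule is applied to a word form, and a resulting lemma is kept only if it is attested in a vocabulary. A minimal sketch of that check, assuming a toy rule list and vocabulary (neither is taken from the source):

```python
# Minimal sketch of add-delete lemmatization: each rule deletes a suffix and
# adds a replacement; a candidate lemma is kept only if the vocabulary
# confirms it. The rule set and vocabulary below are toy assumptions.
RULES = [("ies", "y"), ("es", ""), ("s", ""), ("ing", ""), ("ed", "")]

def candidate_lemmas(word, vocabulary):
    """Apply all add-delete rules and return vocabulary-attested lemmas."""
    candidates = set()
    for delete, add in RULES:
        if word.endswith(delete):
            lemma = word[: len(word) - len(delete)] + add
            if lemma in vocabulary:  # check whether the lemma is "right"
                candidates.add(lemma)
    return candidates

print(candidate_lemmas("stories", {"story", "store"}))  # -> {'story'}
```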
train_93202
Morphological analysis is the task of analyzing the structure of morphemes in a word and is generally a prelude to further complex NLP tasks such as parsing, machine translation, semantic analysis etc.
evaluation is performed on individual attributes as well as on the combined output.
neutral
train_93203
This is an iterative process starting with the initial score, S_0, calculated using frequency counts (as in Section 4.1).
some languages (in particular Semitic languages) have another type of word formation in which morphemes combine in a non-concatenative manner, through the interdigitation of a root morpheme with an affix or pattern template.
neutral
train_93204
If their relation phrases and first arguments are matching, the mismatch of second arguments will be calculated.
this rule is not applied for "isA" (equivalent) relations.
neutral
train_93205
Word alignment approaches are generally used to construct bilingual lexicons automatically from parallel corpora.
the obtained results show that the inclusion in the training corpus of word alignment results integrating transliteration has improved the translation BLEU score from 20.15 to 20.63 (a gain of 0.48 points).
neutral
train_93206
This step uses the results of the transliteration into Latin script of all the proper names present in the Arabic corpus and can identify, for example, that the proper name "Kosovo" and the transliteration of the corresponding Arabic word ("kosoufou") are cognates.
in section 3, we briefly present the system for automatic transliteration of proper names from Arabic to Latin script.
neutral
train_93207
This approach has a strong limitation when used in word alignment as it proposes only one transliteration for a given name.
this step does not detect pairs of words such as "Algeria" and "aljazair" (transliteration of the corresponding Arabic word).
neutral
train_93208
Furthermore, we construct a weighted directed graph by matching the tree of the original sentence with semantic trees of each sense candidate.
(3) The basic principle is to count C_{1,1} (the number of occurrences of w_i and w_j together), C_{1,2} (the number of occurrences of w_i without w_j), C_{2,1} (the number of occurrences of w_j without w_i), and C_{2,2} (the number of bigrams in the corpus that contain neither w_i nor w_j).
neutral
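A minimal sketch of the contingency counts C_{1,1} through C_{2,2} described in the record above, computed over corpus bigrams; the toy corpus and the reading of "together" as "within the same bigram" are assumptions:

```python
# Sketch of the 2x2 contingency counts over the bigrams of a tokenized
# corpus; wi/wj and the corpus are illustrative.
def contingency_counts(tokens, wi, wj):
    bigrams = list(zip(tokens, tokens[1:]))
    c11 = sum(1 for b in bigrams if wi in b and wj in b)          # wi and wj together
    c12 = sum(1 for b in bigrams if wi in b and wj not in b)      # wi without wj
    c21 = sum(1 for b in bigrams if wj in b and wi not in b)      # wj without wi
    c22 = sum(1 for b in bigrams if wi not in b and wj not in b)  # neither word
    return c11, c12, c21, c22

tokens = "the valve controls the bulb and the valve".split()
print(contingency_counts(tokens, "valve", "bulb"))  # -> (0, 3, 2, 2)
```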
train_93209
Ideally, the POS tag of the parent would be a sufficient feature because reflexives in the second meaning should depend on a noun.
the specialized models have been integrated in the TectoMT system and extrinsically evaluated on the English-Czech test set for the WMT 2011 translation task (Callison-Burch et al., 2011). This data set contains 3,003 English sentences with one Czech reference translation, out of which 430 contain at least one occurrence of it and 52 contain a reflexive pronoun.
neutral
train_93210
This method gives preference to sub-phrases according to their length and score.
{w_{-l}, ..., w_{+r}}: following the work in Le and Shimazu (2004), we choose collocations whose lengths (including the target words) are less than or equal to 4, i.e., (l + r + 1) ≤ 4.
neutral
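The constraint (l + r + 1) ≤ 4 above bounds a collocation to the target word plus at most three context words in total. A sketch that enumerates candidate windows under that constraint (scoring is omitted and the function name is hypothetical):

```python
# Enumerate collocation candidates around target position i, taking l words
# on the left and r on the right while keeping (l + r + 1) <= 4.
def collocation_candidates(tokens, i, max_len=4):
    out = []
    for l in range(max_len):
        for r in range(max_len):
            if l + r + 1 <= max_len and i - l >= 0 and i + r < len(tokens):
                out.append(tuple(tokens[i - l : i + r + 1]))
    return out

print(collocation_candidates("we choose such collocations here".split(), 2))
```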
train_93211
The CDEF data structure is defined based on widely used de facto standards such as TEI (Vanhoutte, 2004), CES (Ide, 2000), and the common interface format being developed in the context of ISO committee TC 37/SC 4 (Ide, 2009b).
this paper realizes interoperability between those two types of frameworks to mutually complement their disadvantages.
neutral
train_93212
Comparative experiments show that our approach could efficiently reduce feature dimensionality and enhance the final F 1 value.
experiments and analysis are arranged in Section 4.
neutral
train_93213
Natural language data is implicitly richly structured, and making use of that structure can be valuable in a wide variety of NLP tasks.
in future work, we plan to apply our algorithms to a wider range of tasks, and we will present an analysis of the properties of online learning algorithms over latent structures.
neutral
train_93214
For example, for the query term (bulb, valve), the proposed method uses only the translation "valve".
their work defines the context of word w in a certain sentence, as the translation of word w, in the corresponding translated sentence.
neutral
train_93215
In the first experiment we assume that the user wants to acquire all synonyms irrespective of the difference in senses.
for example, in the automobile domain, the ambiguous Japanese word (bulb, valve) has the synonyms (bulb) or (valve), depending on the meaning intended by the user.
neutral
train_93216
Given a text snippet in which the ambiguous word occurs, their methods select the appropriate sense by finding an appropriate translation.
as a bilingual dictionary, we use a large-sized Japanese-English dictionary with 1.6 million entries.
neutral
train_93217
Unlike the constituency systems, the system with higher-accuracy parses performs better.
this shows that tree kernels still require the regularity encoded in the lower- and higher-accuracy trees.
neutral
train_93218
The experiments show that, compared to the baseline method which does not utilize site-level knowledge, our method can improve the extraction performance significantly.
we evaluate on two aspects where site-level knowledge takes effect: 1) weak semi-block identification in page-level extraction; 2) AV pair extraction in page-level extraction.
neutral
train_93219
Disputed document Q is then compared to each author profile A k using a metric D(Q, A k ) and is attributed to that author for whom D is minimum.
based method we choose terms based on their term frequencies in the corpus.
neutral
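The attribution rule above assigns the disputed document Q to the author whose profile A_k minimizes D(Q, A_k). A minimal sketch, assuming term-frequency profiles and a cosine-based distance; both are illustrative choices, not the paper's exact definitions:

```python
# Attribute a disputed document to the argmin-distance author profile.
import math

def distance(q, a):
    """Cosine-based distance between two term-frequency profiles."""
    dot = sum(q.get(t, 0) * a.get(t, 0) for t in set(q) | set(a))
    nq = math.sqrt(sum(v * v for v in q.values()))
    na = math.sqrt(sum(v * v for v in a.values()))
    return 1.0 - dot / (nq * na) if nq and na else 1.0

def attribute(q_profile, author_profiles):
    return min(author_profiles, key=lambda k: distance(q_profile, author_profiles[k]))

profiles = {"A1": {"the": 5, "valve": 1}, "A2": {"the": 2, "bulb": 4}}
print(attribute({"bulb": 3, "the": 1}, profiles))  # -> 'A2'
```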
train_93220
The concept of capitalization does not exist as such in the Gujarati language.
for a given query text Q and author profiles A_k, the distance function D depends on the method used and is defined separately for each method, as is the author profile A_k.
neutral
train_93221
We describe our method in Section 3.
we evaluated our method in terms of accuracy on two real-world datasets (Blitzer et al., 2007;Maas et al., 2011) for a document-level sentiment classification task.
neutral
train_93222
They then classify a given review by referring to ratings given for the same product by other users who are similar to the user in question.
the datasets contain user/product information and product-only information, respectively.
neutral
train_93223
Figure 2: Mapping between text variations (the index mappings δ_{A→C} and δ_{C→A}). Figure 3: The LCS table for "TGF-β acts" and "TGF-beta acts".
the suffix comparison relies on the dictionary, D: if the two strings have matching suffixes according to the dictionary, it returns the length of those suffixes, which is received by a and b in the modified algorithm.
neutral
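A sketch of the longest-common-subsequence dynamic program behind Figure 3, with a small hook for dictionary-mediated matches such as "β" against "beta"; the hook is an illustrative simplification of the modified algorithm in the record, not the paper's exact procedure:

```python
# Build the LCS table for two strings; the dictionary hook lets a single
# character on one side (e.g., "β") match a spelled-out form ("beta")
# ending at the current position on the other side.
def lcs_table(s, t, dictionary=None):
    dictionary = dictionary or {}
    table = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            best = max(table[i - 1][j], table[i][j - 1])
            if s[i - 1] == t[j - 1]:
                best = max(best, table[i - 1][j - 1] + 1)
            alt = dictionary.get(s[i - 1])
            if alt and t[:j].endswith(alt):  # e.g., "β" matches "beta"
                best = max(best, table[i - 1][j - len(alt)] + 1)
            table[i][j] = best
    return table

table = lcs_table("TGF-β acts", "TGF-beta acts", {"β": "beta"})
print(table[-1][-1])  # LCS length, counting the dictionary match as one unit
```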
train_93224
Nowadays, the widespread use of Unicode is one of the reasons for text variants, particularly when it comes to text processing, because many NLP tools, e.g., syntactic parsers, cannot handle Unicode characters properly.
while those annotations may serve their goals individually, further benefit, e.g., reuse, comparison, or aggregation, can be gained from interoperable use of them.
neutral
train_93225
Thus, NBC with ULM usually suffers from the issue of word usage diversity and polysemy, which can degrade the DC performance.
section 2 defines the notations and briefly reviews the related work.
neutral
train_93226
The process dimension of CIGs consists of four pillars: (1) activities to be executed; (2) the resources they use or consume; (3) the actors that execute them; (4) control flows and gates that temporally constrain activities.
the intuition behind these features is that certain types may correlate strongly with syntax (e.g., one would expect "resource" to annotate an object NP).
neutral
train_93227
Considering the consistency cost with respect to both the initial ranking results of f_1 and the feedback from f_2, we define the cost function caused by refining f_1 with f_2 in the (t + 1)-th iteration round, where f_1^(0) denotes the initial ranking scores of f_1.
formally, given a set of candidate templates TS_candidate = {T^(1), T^(2), ..., T^(n)} ⊂ R^m, let f_k : TS_candidate → R denote the ranking function on the k-th feature, where f_k ∈ f = {f_1, f_2, f_3}.
neutral
train_93228
Our proposed method extracted accurate parallel fragments, which led to correct new phrases.
they extract sub-sentential parallel fragments by using a Log-Likelihood-Ratio (LLR) lexicon estimated on an external parallel corpus and a smoothing filter.
neutral
train_93229
Hewavitharana and Vogel (2011) propose a method that calculates both the inside and outside probabilities for fragments in a comparable sentence pair, and show that the context of the sentence helps fragment extraction.
the baseline system used the parallel corpus (680k sentences).
neutral
train_93230
Note that the exact match criterion has a bias against (Munteanu and Marcu, 2006), because their method extracts sub-sentential fragments which are quite long.
the noise contained in the data extracted by "+sentences" and "Munteanu+, 2006" produced many noisy phrase pairs, which may decrease MT performance.
neutral
train_93231
Stacked generalization is a general method of using a meta-level model to combine base-level models to achieve higher accuracy.
most ensemble methods use a single base learning algorithm to produce homogeneous base learners, but there are also some methods which use multiple learning algorithms to produce heterogeneous learners.
neutral
train_93232
At the end of this step, 5 different systems are built based on 5 different training sets, called SMT_j, j = 1, ..., 5.
this algorithm is introduced for combining multiple models; we focus our attention on utilizing it to improve a single SMT system.
neutral
train_93233
The general strategy is to propose a number of sets of biterminal rules and a place to segment them, estimate the posterior probability given these sets and commit to the best.
we introduce a minimalist, unsupervised learning model that induces relatively clean, compact phrasal translation lexicons by employing a novel Bayesian approach that attempts to find the maximum a posteriori (MAP) or minimum description length (MDL) model.
neutral
train_93234
The transduction grammar that fits the training data the best is the one where the start symbol rewrites to the full sentence pairs that it has to generate.
we directly evaluate the grammars that we induce.
neutral
train_93235
We can see that selecting features within individual feature types leads to better results compared to applying feature selection to the full set.
quality Estimation (QE) involves judging the correctness of a system output given an input, without any output reference.
neutral
train_93236
For Persian-English translation model, weights are optimized using a set 1000 sentences randomly sampled from the parallel corpus while the English-Arabic translation model weights are optimized using a set of 500 sentences from the 2004 NIST MT evaluation test set (MT04).
we show positive results for Persian-Arabic SMT (from 0.4 to 3.1 BLEU on different direct training corpus sizes).
neutral
train_93237
e is the English pivot phrase that is common in both Persian-English translation model and English-Arabic translation model.
work on Persian SMT is limited to a few studies.
neutral
train_93238
The English side is tokenized using Tree Tagger (Schmid, 1994).
recent research (Carpuat and Diab, 2010; Bouamor et al., 2012) shows that explicitly modeling MWEs in the SMT framework yields non-negligible gains depending on the integration method.
neutral
train_93239
In our application scenario, we plan to implement several different text manipulation methods (and combine them based on their confidence).
this manipulation affects the watermark bit contained in the passage and thus allows information encoding.
neutral
train_93240
this comes at the cost of relatively low precision (around 70%, even when using the information about the correct word sense).
these results suggest that our methodology, making a change in 7-14% of the sentences with a precision of 90-98% is a very competitive alternative for precision-oriented linguistic steganography.
neutral
train_93241
A good overview and comparison of different statistical approaches is given in Farkas et al.
syntactic reordering looks e.g.
neutral
train_93242
With the help of the framework we preliminarily propose a sequence cutting strategy to enhance the current IME.
we annotate it with pinyin sequence using the method in (Yang et al., 2012b).
neutral
train_93243
Using these results, we conclude that the network predicted by our system that uses tree kernels performs well in terms of extracting an unweighted, undirected network from Alice in Wonderland.
following are some basic metrics used for this task.
neutral
train_93244
The trade-off is that we have to maintain a good enough local lexicon yet with an extremely limited number of entries.
the training data is a 2.5 TB Japanese Web page set.
neutral
train_93245
Since all the probabilities are estimated in a maximum likelihood way, we can simply update the probabilities in these models by accumu-lating frequencies to similar words/phrases.
furthermore, by connecting several Chinese characters together, the number of yielded words and frequently used phrases/idioms is in the millions.
neutral
train_93246
We argue it is essential for the IME system to regularly update its compound lexicon to cover these new and hot words/phrases.
how to choose the context from large-scale Web pages such that the context is optimized for use in a mobile-device-oriented Japanese IME?
neutral
train_93247
In our corpus, we found that 96.6% of the total interactions are Linear Disjoint interactions.
in such an interaction, the user interacts with the system with a topic in mind and a goal to achieve.
neutral
train_93248
A relevant work in this direction is by Mejer and Crammer (2012).
it is not extendable to edge correctness prediction.
neutral
train_93249
We have shown that our predictive model can select a source language -based on only monolingual features of the source and target languages -that improves tagger accuracy compared to choosing the single, best-overall source language.
creating annotated linguistic resources is expensive and time-consuming.
neutral
train_93250
Table 4 summarizes the gazette sizes along with precision for NE types from both agriculture and mechanical engineering (IC engine parts) corpora.
the utility and effectiveness of our method are evident from its ability to discover new named entities.
neutral
train_93251
The definition of MED_0 is extended to use a given set W (rather than a single word w) by taking the average of the MED_0 distance between g and each word in W: MED_{D,K}(g, W) = (1/|W|) Σ_{w∈W} MED_0(g, w). We assume that a subroutine MED(D, K, W, g) returns the MED distance MED_{D,K}(g, W), as defined above.
createGazetteMED clearly outperforms BASILISK for all other categories.
neutral
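A minimal sketch of the set-extended distance reconstructed above, MED_{D,K}(g, W) = (1/|W|) Σ_{w∈W} MED_0(g, w), with plain Levenshtein edit distance standing in for the paper's MED_0:

```python
# Average a base string distance between candidate g and each seed word in W.
def med0(a, b):
    """Levenshtein edit distance (a stand-in for the paper's MED_0)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def med(g, W):
    """MED(g, W): average base distance between g and the words in W."""
    return sum(med0(g, w) for w in W) / len(W)

print(med("wheat", ["maize", "barley", "oats"]))
```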
train_93252
All of this research has shown the very good capability of the neural language model in learning word representations.
as input, character features (unigram or bigram features) are fed as indices taken from a finite dictionary D. This dictionary D contains all the character features which appeared in the training data.
neutral
train_93253
The greater α is, the probability of producing more classes increases.
the details of our model are elaborated in Section 3.
neutral
train_93254
So far we have extended the potential function used in node cliques of a CRF to a non-linear DNN.
overall we found that the feature set we used is competitive with CRF results from earlier literature (Turian et al., 2010;Collobert et al., 2011).
neutral
train_93255
The task of this study is to predict the category of each article from its content (text).
one is (e) all data in 2005 newspapers are used for adaptation (Normal Case).
neutral
train_93256
Methods We test the methods described in Sections 2.2 and 2.3 (represented as 'Transfer' and 'Online,' respectively).
if the documents that users want to apply the systems to do not belong to the domain of the annotated corpora, the resulting accuracy tends to be unsatisfactory.
neutral
train_93257
Our future work is to apply model adaptation to structured learning.
it usually loses information about old samples (in our case, the original data).
neutral
train_93258
Choosing a suitable set of features and automatically obtaining values for features pose obstacles for these methods (Islam and Inkpen, 2008).
the human-and computer-generated scores are compared; the results are promising, but point to the need for further refinement.
neutral
train_93259
We applied the algorithm to 15 pairs of sentences written for the purpose of testing the approach.
they achieve improved results by examining word pairs instead of single words (Okazaki et al., 2003).
neutral
train_93260
Here, patterns are generated as a sequence of typed dependencies and lemmas of tokens on the shortest path between two entities in a parse.
firstly, it does not necessarily produce patterns at the desired granularity.
neutral
train_93261
We therefore implement this option mainly for evaluation purposes, in order to determine URE capabilities given a fine-grained, high quality type system for selectional restrictions.
unsupervised Relation Extraction (URE) approaches do not require target relations to be pre-specified, and require no labeled training data (Rosenfeld and Feldman, 2007).
neutral
train_93262
In an analysis of the prevalence of coordinations in different corpora, we observed that long coordinations (those with at least three conjuncts) are more prevalent in patents than in other genres (average length of 4.6 in EPO vs 3.6 in other corpora).
we show that the restriction to coordinations has several additional benefits, such as improved extraction of multiword expressions, and the possibility to scale up previous efforts.
neutral
train_93263
Such trees differ in the position of the EVENT/RELATION nodes (at level 1 of the tree).
it should be noted that the above results do not indicate the final system accuracy: for this purpose, the next section shows the 3-fold crossvalidation results using cost-fact values that are (i) derived from the validation set (not included in the cross-validation data) and (ii) slightly different from those optimizing the plot in the figures.
neutral
train_93264
We also show that exploiting syntactic information of the sentence containing the relational expressions is not trivial.
results on two data sets (Machine Reading and TimeBank) with 3-fold cross-validation show that the combination of traditional feature vectors and the new structural features improves on the state of the art for inter-sentence TERE by about 20%, achieving a 30.2 F1 score on inter-sentence TERE alone, and 47.2 F1 for all TERE (inter- and intra-sentence combined).
neutral
train_93265
The method of Mauge et al.
the evaluation results are shown in table 7.
neutral
train_93266
The purity score is close to perfect, which means that the merged expressions are mostly regarded as synonyms.
especially, the precision of the extraction models is improved by 7.9%.
neutral
train_93267
model is used to structure unstructured pages in the category.
we obtain a vector of attributes that can be regarded as synonyms, such as (region, country), and (grape, grape variety) for the wine category.
neutral
train_93268
An evaluation was conducted for each component, that is, evaluation of the KB (Section 5.2), evaluation of the automatically annotated corpora (Section 5.3), and evaluation of the extraction models (Section 5.4).
we employ a simple rule-based approach to induce the KBs.
neutral
train_93269
This is because LMUC uses w_sing to penalize these four responses for identifying wrong singleton clusters.
case 3: |C_{ij}| = 1. In this case, K_i and S_j have one mention in common.
neutral
train_93270
Despite the large differences in these responses, B 3 only gives 0.7% more points to response (e) than response (a).
here, response (e) is assigned a higher score by LCEAF(W 3 ) than response (a): response (a) is heavily penalized because of the many erroneous clusters it contains.
neutral
train_93271
As expected, LMetrics(W_4) and LMetrics(W_5) assign much lower scores to response (a) than the OMetrics, owing to a relatively small value of w_pro.
w_c^L returns w_sing if all of C_{ij}, K_i and S_j contain exactly one mention (which implies that the singleton cluster C_{ij} is correctly identified); otherwise, w_c^L returns 0.
neutral
train_93272
One of MUC's shortcoming is that it fails to reward successful identification of singleton clusters.
the same is no longer true in a linguistically aware setting: since the links may not necessarily have the same weight, the weight assigned to C_{ij} depends on which |C_{ij}| − 1 links are chosen.
neutral
train_93273
We, then evaluated the performance of these corpora on the Libya dataset.
at the end of this step we have a list of documents (denoted by D_lm) in which both words co-occur, along with their frequency in the documents.
neutral
train_93274
Wikipedia Simple is a relatively new version of the English Wikipedia, where articles are written in simple English.
the Flesch-Kincaid formula (Flesch, 1948; Kincaid et al., 1975) was probably the first to gain wide recognition among publishers.
neutral
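The Flesch-Kincaid grade-level formula mentioned above is 0.39 (words/sentences) + 11.8 (syllables/words) - 15.59. A sketch with a crude vowel-group syllable heuristic (the heuristic is an assumption; careful implementations count syllables more accurately):

```python
# Flesch-Kincaid grade level with a rough syllable counter.
import re

def count_syllables(word):
    """Approximate syllables as maximal vowel groups (crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 2))
```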
train_93275
MAP(· | r_0), where we want to obtain the most plausible linguistic instantiation given a certain readability level r_0.
content producers might be tempted to adapt their manuscripts by tweaking the text features present in readability formulas, without gaining (or even degrading) real readability (Davison and Kantor, 1982).
neutral
train_93276
needs to be tagged as containing a cause relation.
to find the most prevalent discourse relations for opinion summarization, we used the TAC 2008 opinion summarization track input document set (a subset of BLOG06) and the answer nuggets provided by TAC 2008 as the reference summaries (or model summaries), which had been created to evaluate participants' summaries in the TAC 2008 opinion summarization track.
neutral
train_93277
As a baseline, we used the original candidate sentences generated by MEAD.
we have complemented this parser with three other approaches: (Jindal and Liu, 2006)'s approach is used to identify intra-sentence comparison relations; we have designed a tagger based on (Fei et al., 2008)'s approach to identify topicopinion relations; and we have proposed a new approach to tag attributive relations (Mithun, 2012).
neutral
train_93278
(2008) and have produced better human agreement in annotation experiments than generic sentence fusion (Daumé III and Marcu, 2004).
We therefore use two baselines for this evaluation.
neutral
train_93279
There are hundreds of particles in the Korean language, but many of these are not used often, e.g., 9 particles cover 70% of particle use in a data set of thesis abstracts and 32 cover 95% in a study by Kang (2002).
due to the somewhat ambiguous nature of particles with respect to other morphemes and root endings, we cannot be certain that all of these edits are in fact particles, but can be confident that a majority are.
neutral
train_93280
All particles (erroneous or correct) are labeled as to their function (e.g., locative), allowing us to group particles into categories, to see how classifier performance differs.
this classifier can provide useful feedback to learners, especially higher-level ones who may know the correct particle once its omission is highlighted.
neutral
train_93281
Lexical error (lex) shows 69% precision and 66% F-measure.
czech is a West Slavic language that belongs to the Indo-European language family.
neutral
train_93282
(2005) report slightly above 80% accuracy (with all features combined) compared to 20% baseline for 5-class classification.
at the same time, for other errors, it is impossible to identify their nature and group prevalence based on the data available.
neutral
train_93283
If an essay was written reasonably well the participants assumed that the author belonged to the IE group of learners, and vice versa.
we observe that some of the best performing errors occur when a learner's L1 interferes and affects the production of L2.
neutral
train_93284
Further, we would like to perform experiments with larger sets of data and to compare the performance of features for other levels of proficiency.
the results provide empirical evidence for the different types of errors discussed within the Error Analysis approach.
neutral
train_93285
This reduces the number of required dot-products into the sum of the number of classes and the words in a class.
the number of target words must be limited.
neutral
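The record above factors the output distribution into a class decision followed by a within-class decision, so computing p(w | h) costs a number of dot products equal to the number of classes plus the number of words in one class, rather than the vocabulary size. A minimal NumPy sketch with toy dimensions (all names and sizes are illustrative):

```python
# Class-factored softmax: p(w|h) = p(class(w)|h) * p(w|class(w), h).
import numpy as np

rng = np.random.default_rng(0)
hidden, n_classes, words_per_class = 16, 10, 20
W_class = rng.normal(size=(n_classes, hidden))                  # class scores
W_word = rng.normal(size=(n_classes, words_per_class, hidden))  # in-class scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_prob(h, c, w):
    p_class = softmax(W_class @ h)   # n_classes dot products
    p_word = softmax(W_word[c] @ h)  # words_per_class dot products
    return p_class[c] * p_word[w]    # 30 dot products here, not 200

h = rng.normal(size=hidden)
print(word_prob(h, c=3, w=7))
```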
train_93286
Our experiments on the WMT'14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique.
on the target side, we insert a positional token p_d after every word.
neutral
train_93287
Previous work on VDR-based image description has relied on training data from expert human annotators, which is expensive and difficult to scale to other data sets.
our approach is to find the objects mentioned in a given description using a state-of-the-art object detector, and to use successful detections to produce training data.
neutral
train_93288
This could be attributed to the shift in the types of scenes depicted in each data set.
it contains a wide variety of subject matter drawn from the original 20 PASCAL Detection classes.
neutral
train_93289
QA is another representative sentence matching problem.
if we map the representations of TEXTCHUNK S_1 into an appropriate space, then we can hope that similarities between these transformed representations of S_1 and the representations of TEXTCHUNK S_2 do yield useful features.
neutral
train_93290
(2013) propose a framework for taxonomically categorizing forum posts, leveraging manual annotations.
we find that around 90% of MOOC posts have only one aspect, which makes sentence-level aspect modeling inappropriate for our domain.
neutral
train_93291
We use the maximum value in the posterior for the distribution over topics for each post to obtain predictions for coarse aspect, fine aspect, and sentiment.
using the top three predictions made by PSL-Joint, we can understand the fine aspect of the post to a great extent.
neutral
train_93292
We provide results only on WS-353 and SCWS, since the above-mentioned approaches do not report their performance on other datasets.
the task provides a dataset comprising 79 graded word relations, 10 of which are used for training and the rest for test.
neutral
train_93293
(2012) applied K-means clustering to decompose word embeddings into multiple prototypes, each denoting a distinct meaning of the target word.
in order to alleviate this issue, we leveraged a state-of-the-art Word Sense Disambiguation (WSD) algorithm to automatically generate large amounts of sense-annotated corpora.
neutral
train_93294
As representatives for graph-based similarity techniques, we report results for the state-of-theart approach of Pilehvar et al.
these techniques either suffer from low coverage as they can only model word senses that occur in the parallel data, or require manual intervention for linking the obtained representations to an existing sense inventory.
neutral
train_93295
Figure 3 draws the envelope of the histogram of cosine distances between all target-choice word pairs in the GRE test set, calculated in the embedding space learned with MCE.
we use antonym pairs in lexical resources to learn contrasting neighbors.
neutral
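A sketch of the cosine distances behind the histogram described above; random vectors stand in for the learned MCE embeddings, and the word pairs are illustrative:

```python
# Cosine distance between embedding vectors, plus a histogram of the values.
import numpy as np

rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ["hot", "cold", "warm", "icy"]}

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

pairs = [("hot", "cold"), ("hot", "warm"), ("cold", "icy")]
dists = [cosine_distance(embeddings[a], embeddings[b]) for a, b in pairs]
hist, edges = np.histogram(dists, bins=5, range=(0.0, 2.0))
print(hist, edges)
```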
train_93296
Each GRE question has a target word and five candidate choices; the task is to identify among the choices the most contrasting word with regard to the given target word.
by definition, opposites are a subset of contrasting word pairs (refer to (Mohammad et al., 2013) for detailed discussions).
neutral
train_93297
It can be seen as the intrinsic dimensionality of a latent embedding of the elementary feature vectors.
a standard approach to control the capacity of a linear classifier is to use ℓ1 or ℓ2 regularization on the parameter vector.
neutral
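The record above names ℓ1/ℓ2 regularization as the standard way to control the capacity of a linear classifier. A scikit-learn sketch on synthetic data, purely illustrative; note how the l1 penalty zeroes out coefficients while l2 only shrinks them:

```python
# Capacity control for a linear classifier via l1 or l2 penalties.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, C=0.5, solver=solver).fit(X, y)
    print(penalty, "nonzero coefficients:", int((clf.coef_ != 0).sum()))
```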
train_93298
Therefore, syntagmatic models tend to favor the use of larger text regions as context.
the second category mainly captures paradigmatic relations, which relate words that occur with similar contexts but may not cooccur in the text.
neutral
train_93299
A low-rank decomposition is then conducted to learn the distributional word representations.
the objective function of PV-DM can no longer be decomposed as in the PDC model, as shown in Equations (4) and (5).
neutral