| id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
|---|---|---|---|
| train_19700 | That is probably the reason for the much lower number of NE errors by the NTS systems trained and tested on the Wikipedia dataset. | the large number of NE errors made by the NTS models trained and tested on the Newsela dataset does not seem to have greatly influenced the overall performance of the NTS systems (see Table 6). | contrasting |
| train_19701 | We acknowledge that more work is needed to make sequence-to-sequence models flexible enough for handling out-of-vocabulary words, especially in a cross-domain text simplification. | neural TS systems were still able to produce grammatical output and correctly model sentence splittings and sentence shortenings even across different text genres. | contrasting |
| train_19702 | For entities, it is straightforward what this referential world is. | compared to entities, events are less tangible (Guarino, 1999; for various reasons: 1. we use a small vocabulary to name events, which results in large referential ambiguity 2. events are more open to interpretation and framing, which leads to more variation in making reference 3. events are less persistent in time than entities 4. each event has many idiosyncratic properties, e.g. | contrasting |
| train_19703 | Our annotation is more restricted to the specific events of this database only. | it can be extended to other domains by defining a different event schema. | contrasting |
| train_19704 | For example, the most common expressions for the event type Death can be detected by correctly identifying the WordNet synsets kill.v.01 (cause to die; put to death, usually intentionally or knowingly) and killing.n.02 (the act of terminating a life). | this is not the case for all expressions in the GVC. | contrasting |
| train_19705 | Moreover, RDF has demonstrated a promising ability to support the creation of NLG benchmarks (Gardent et al., 2017). | english is the only language which has been widely targeted. | contrasting |
| train_19706 | Hence, the determiner to be used in front of this predicate needs to be "o". | to dbo:birthDate ("data de nascimento"), the word "data" is feminine, thus the determiner must be "a". | contrasting |
| train_19707 | To this end, we handle the ambiguity of possessive pronouns by interspersing the alternative forms, e.g., dele (eng:his) or dela (eng: her)" which agrees with the subject. | it is used just in case more than one subject exists in the same description. | contrasting |
| train_19708 | Consider the following triple, (:Albert Einstein dbo:nationality :Áustria), its objectÁustria is a demonym and should be lexicalized as an adjective. | it is lexicalized as a noun because the part-of-speech recognized by RDF2PT considers only the label Austria, which is a noun, and does not consider the predicate nationality, which is an important part, thus decreasing the quality of the generated texts. | contrasting |
| train_19709 | We looked at a large set of musical pieces, trying to map notes, or chords, to words, and find a good mapping, so that any musician would have to invest little effort in adapting to the mapping, starting from the music (s)he already knows. | defining a good mapping is not an easy task. | contrasting |
| train_19710 | In Figure 1 we show parts of the current MIMEdh web pages for two cornets and a bassoonthe information displayed comes directly from the database and is not very engaging for a museum visitor. | figure 2 shows a mock-up of a potential visitor experience of a virtual web musem visit using texts generated by Methodius using the techniques we will describe below. | contrasting |
| train_19711 | The extraction process also has to be generic, because of the large number of lexica it would be too cumbersome to craft a targeted extraction algorithm for each webpage. | the pages do not offer structural patterns which could allow for a reliable boilerplate and metadata extraction (Barbaresi, 2016). | contrasting |
| train_19712 | Among others, the construction of the Penn Treebank (Marcus et al., 1993) made possible the existence of many statistical parsers trained on syntactic treebanks which perform at an accuracy of about 90% (Charniak and Johnson, 2005). | computational semantics is lagging behind in this respect, as most of the semantic annotated resources for English are rather scattered or simply nonexistent for the large majority of languages. | contrasting |
| train_19713 | Furthermore, AMR tries to abstract away from both morphological and syntactic idiosyncrasies that account for several crosslingual differences. | there is still a problem with this approach, as it is strongly biased (by design) to annotate English sentences. | contrasting |
| train_19714 | o Morphologically related terms can appear in a family labeled "Word family" colored in purple. | they can also be encoded elsewhere. | contrasting |
| train_19715 | These distinctions would be much more difficult to account for in textual format. | it should not replace the textual resource altogether since other forms of information are better represented in textual format (definitions, contexts, etc.). | contrasting |
| train_19716 | It avoids modeling very long sequences, as the character-based models do, by preserving trivial compositionality in consecutive alphabetical letters and digits. | the separation between letters, digits, and special tokens explicitly represented most of the idiomatic syntax of Bash we observed in the data: the sub-token based models effectively learn basic string manipulations (addition, deletion and replacement of substrings) and the semantics of Bash reserved tokens such as $, ", * , etc. | contrasting |
| train_19717 | • BLSTM (Bidirectional LSTMs): Bidirectional LSTMs were first proposed by (Graves and Schmidhuber, 2005). | the underlying concept of bidirectional recurrent neural networks was proposed by (Schuster and Paliwal, 1997). | contrasting |
| train_19718 | This could be a lack of generalization during training due to the reduced size of training data using the 128 h subset. | since both the LSTM and Chain B model perform better than the BLSTM, we neglect the BLSTM in the following experiments. | contrasting |
| train_19719 | The longer the distance between two data points, the less related the data points are. | this is usually not the case for real multimedia signals including text, sound and music. | contrasting |
| train_19720 | Experiments with different content of the acoustic training data indicate that careful selection of phonotactical data could be advantageous when developing a speech corpus of limited size. | due to the extra effort needed to collect such data and the possible loss in recording quality due to reading mistakes, a clear positive impact of special phonotactical data should be evident. | contrasting |
| train_19721 | Speaker modeling is in fact important to dialog systems, and has been studied in traditional dialog research. | existing methods are usually based on hand-crafted statistics and ad hoc to a certain application (Lin and Walker, 2011). | contrasting |
| train_19722 | The LSTM-RNN attends to speaker s i to obtain speaker vector s i while it is encoding current utterances. | such attention mechanisms bring little improvements (if any). | contrasting |
| train_19723 | A plausible explanation is that training a hybrid model as a whole leads to optimization difficulty in our scenario; that simply interpolating well-trained models is efficient yet effective. | the hyperparameter g is sensitive and only yields high performance in the range (0.6, 0.9). | contrasting |
| train_19724 | The training of an automatic speech recognition system is usually straight forward given a large annotated speech corpus for acoustic modeling, a phonetic lexicon, and a text corpus for the training of a language model. | in some use cases these resources are not available. | contrasting |
| train_19725 | BLSTM-CTC AMs play an important role in end-to-end automatic speech recognition systems. | there is a lack of research in speaker adaptation algorithms for these models. | contrasting |
| train_19726 | Regardless of which text prompts are selected, or how they are recorded, the outcome of this process is a set of text and audio files with corresponding contents. | before these files can be used to build a synthetic voice for MaryTTS, they have to be phonetically annotated. | contrasting |
| train_19727 | Although such synthesis can sound rather buzzy and unnatural, these HMM-based voices offer higher flexibility and more consistent quality than unit-selection synthesis, as well as a much smaller memory footprint. | some drawbacks are (a) that building HMM-based voices for MaryTTS has a high technical overhead, and (b) that the Java port has become quite outdated, while HTS development has seen significant progress. | contrasting |
| train_19728 | 1 A fully unsupervised method of calculating the speech rate estimation based on syllable nuclei detection was presented by de Jong and Wempe 2009as a Praat (Boersma and van Heuven, 2001) script, though we have yet to test this method on the data of the ILMT-s2s corpus to verify whether similar reliability can be obtained. | analysis of the syllables per second may not provide any better results since an analysis of the Switchboard corpus data (Godfrey et al., 1992), which is a corpus of a collection of natural and spontaneous telephone conversations, by Greenberg (1999, p. 167) indicated the following: Although only 22% of the Switchboard lexicon is composed of monosyllabic forms, approximately 80% of the corpus tokens are just one syllable in length [. | contrasting |
| train_19729 | For example, when the measurement of wpm is used, it is not possible to say if the median (Mdn) utterance speed for "Okay" at 156.9 wpm was really spoken at half the intentional speed of "Got you" at 372.7 wpm. | by comparing the utterance duration of the subject and the duration of the TTS output, both utterances can be indicated with a more reasonable median (Mdn) speech rate value and say that "Got you" at 42.6% was spoken with a slightly faster speech rate than "Okay" at 33.29%, but not twice as fast as the wpm value would lead one to believe. | contrasting |
| train_19730 | We would expect this kind of information in a generic summary about the topic. | each facet should also branch off and discuss the most important symptoms for affirming or excluding a diagnosis in one branch, as well as different procedures, their advantages and disadvantages, and evidence for their effectiveness in other treatment-specific branches. | contrasting |
| train_19731 | The baseline system generated the shortest compression because all arcs of the WG have the same weights. | this system analyzes neither the grammaticality nor the most used n-grams in the clusters. | contrasting |
| train_19732 | Systems capable of summarizing live streams of heterogeneous content can be directly beneficial to users and even assist journalists during their daily work. | this new task also comes with new challenges. | contrasting |
| train_19733 | We took a snapshot of this page that provided us with 16,246 unique live blogs. | the BBC website has no such live blog archive. | contrasting |
| train_19734 | (1) can compute the TF across all tweets. | the IDF is limited due to only one document. | contrasting |
| train_19735 | It confirms the efficiency of the model in summarizing short texts (Inouye and Kalita, 2011). | a large margin between the ROUGE scores of the hybrid model and the upper bounds (Table 2) suggest that its performance can be improved. | contrasting |
| train_19736 | The delay can be just a few hours, days or sometimes the news does not appear in the regional languages at all. | some content gets generated and consumed in regional languages alone. | contrasting |
| train_19737 | Like PEAK, PyrEval implements pyramid construction and automated scoring. | to PEAK, it can also score target summaries against a manually annotated pyramid that has been produced with the DUCView annotation tool. | contrasting |
| train_19738 | For instance, if someone tells about the last time they baked a cake, it is likely that they do not mention the fact that the cake was put into the oven, because it is obvious that this event took place. | a text understanding system that does not have access to script knowledge will probably not be able to draw this inference. | contrasting |
| train_19739 | The low performance on the Equality cases is mainly due to the fact that it subsumes the difficult Diathesis and Phrasal Verb relations. | the results for Reverse Entailment are better than one would expect: This is mostly due to the fact that most Reverse Entailment cases (approx. | contrasting |
| train_19740 | Target: influence (B2) Candidates: determine (B1), impress (A2), change A1In 2, where the target is the verb "influence" (B2), "determine" (B1) passes the grammatical reformation stage but fails at the definition stage because it does not cover the semantics of "influence" in the meaning of affecting the way someone thinks or behaves, though it might do so in another context where determination causes a change in someone's way of thinking. | the word "impress" (A2) passes this stage because it can be used to mean to affect the way someone believes. | contrasting |
| train_19741 | Figure 1 shows the T_Precision curve and Figure 2 shows the T_Recall curve when varying the value of n. It turned out that LM and W2V have about 80% and 90% of T_Precision within the top five candidates respectively, which means a large portion of the targets that are assigned any candidates of CEFR-LS have at least one correct candidate in the top five words. | it can be said that W2V outperforms LM in that it finds a correct answer more quickly (60% T_Precision for the top-ranked word). | contrasting |
| train_19742 | So far the development and operation of the Language Grid have been focused on supporting language service developers and end users. | application developers in international NPO/NGOs continue to struggle with available language services in creating multi-language systems. | contrasting |
| train_19743 | O&M further note the Orientation is the starting state and the Resolution is the ending state. | we have not yet found a correspondence for Rising and Falling Actions in L&W's framework. | contrasting |
| train_19744 | Strictly speaking, the first half of the next sentence "one night I was cycling home …" can be seen as part of Orientation. | for consistency we do not allow one sentence to be broken into multiple parts with different annotation labels. | contrasting |
| train_19745 | The studies reviewed above exemplify a rich research tradition using statistical analysis of wine review corpora. | there are few studies that have applied natural language processing (NLP) techniques to such data. | contrasting |
| train_19746 | A figure like William Shakespeare is widely acknowledged as important by literary critics because his works are meaningful and there is a great deal to be said about them. | if we remove Shakespeare from the network, we can then explore the extent to which the network is altered: is Shakespeare such a focal point that it breaks apart or do the connections occur with a high enough frequency that its shape is unaffected? | contrasting |
| train_19747 | It is tempting to blame those differences on typological differences between the languages and to speculate about the features that make the German language so well-suited for authorship attribution. | as the sampling experiments show, the performance of the attribution methods varies considerably with corpus composition. | contrasting |
| train_19748 | In his experiments, an average price was $0.15 per argument. | our experiments with AMT were unsuccessful due the German-fluency requirements on the workers. | contrasting |
| train_19749 | The regionalism péguer 'to stick' has a fair recognition rate of about 0.6, but it has the highest standard deviation, meaning that its recognition rate varies widely across the area. | the question about the number 80 yielded exceptionally low recognition scores, due to the particular way the question was asked. | contrasting |
| train_19750 | With both automatic methods, we reached the desired accuracy threshold with comparable area sizes and number of variables (about 20). | the variables selected by the SVM classifier intuitively corresponded better to the variation patterns observed in the original survey data. | contrasting |
| train_19751 | Several works have attempted to link gradable adjectives with numerical quantities that co-occur in the context of the gradable adjective mention (Shivade et al., 2016;Narisawa et al., 2013). | this dependence on corpus resources to find evidence for gradability requires complex information extraction and suffers greatly from sparsity, especially when attempting to ground adjectives in a new domain. | contrasting |
| train_19752 | This confirms that, indeed, language intuitions are more robust for high-frequency adjectives. | this effect is only seen in the full model when standard deviation is known. | contrasting |
| train_19753 | While the specifics of the rules are language dependent, there is potential to enable linguists to describe these rules to the model in order to improve transcription in the language documentation setting. | this relies on identifying tone groups since the tonal rules do not hold across the dividing lines between tone groups. | contrasting |
| train_19754 | The quantitative results reported up to this point are based on training, validation and test sets randomly selected at the utterance level. | this means the training set gets a fair representation of utterances from all the narratives present in the test set. | contrasting |
| train_19755 | The small size of the data set would encourage novel approaches for a MT model, as there is not enough data to use many machine learning techniques. | as no system yet exists, a MT system would assist in generating new texts in Choctaw from English. | contrasting |
| train_19756 | Text, speech, and video include different degrees of personal information and thus identification of the individuals who produced the data can vary (Jokinen 2011). | given present-day techniques of data processing, individual characteristics may be retrieved easily. | contrasting |
| train_19757 | Several efforts have extended them to cover other dialects (Jarrar et al., 2014;Zribi et al., 2014;Saadane and Habash, 2015;Turki et al., 2016;Khalifa et al., 2016). | they focused on specific dialects and often made ad hoc decisions. | contrasting |
| train_19758 | These steps are shown in Figure 1. | the assumption of one-to-one mapping is too strong to induce the many-to-many translation pairs needed to off-set resource paucity because few such pairs can be found. | contrasting |
| train_19759 | These facts mean that their corpus-derived features are based on a larger number of occurrences in Gigaword and may have less diversity in their contexts (and hence features). | the operational and relative part nouns had very low recall, despite acounting for an intermediate number of examples (by number of unique token types). | contrasting |
| train_19760 | Previous approaches to cognate transliteration (Mulloni, 2007;Beinborn et al., 2013) suffer from the drawback that they require an existing list of cognates, which is infeasible for low-resource languages. | we automatically generate cognate tables by clustering words from existing lexical resources using a combination of similarity measures. | contrasting |
| train_19761 | php?products_id=1184, ELRA-W0063 cover several specialized topics related to the overall domain. | when working with topic specific corpora, some general domain terminology might not be perceived as such since the corpus does not offer broad view of the subject. | contrasting |
| train_19762 | As with the previous measure, idf scores were also mapped on a scale of 0 to 100. | in the case of idf, we reverse the score so that the most "interesting" GEL candidates for our study receive a higher idf. | contrasting |
| train_19763 | Figure 4 indicates that the specificity scores are useful to identify terminologically interesting lexical items. | for our current goal, which is to identify GEL entries, the usefulness of this measure is mitigated by the fact that valid candidates are scattered throughout the score range. | contrasting |
| train_19764 | Cucerzan (2007b) and Han and Zhao (2009) described other algorithms for NERL. | to most of these previous works, multilingual support is at the core of HED-WIG. | contrasting |
| train_19765 | (2013) propose universal schemas, which are the mapping between NL surface forms to the KB predicates, by using matrix factorization. | all of them suffer from low coverage on relational phrases. | contrasting |
| train_19766 | Let an NL dataset be a 3-tuple ( , , ), where is a set of NL patterns, is a set of entities, and is a set of NL triples. | for example, (Garnett, was born in, Mauldin) DBpedia (Auer et al., 2007) serves as our target KB. | contrasting |
| train_19767 | These types of approach also often combine dictionaries with manually defined heuristic rules (Gerner et al., 2010;Kang et al., 2013). | rules are timeconsuming to implement and highly dependent on the task and domain. | contrasting |
| train_19768 | Therefore, the corpus primarily consists of dialogues. | these are not real dialogues that have been recorded and transcribed 2 http://gcp.nict.go.jp/about/index.html but pseudo-dialogues written by scenario writers imagining possible situations. | contrasting |
| train_19769 | First, the BLEU scores are significantly different for each language, ranging from 22.05 to 52.87 for translation from Japanese and from 23.39 to 58.13 for translation to Japanese. | the score tended to increase with an increasing number of training sentences. | contrasting |
| train_19770 | We can directly construct translators between all possible language pairs from multilingual parallel corpora (i.e., direct translation). | if we do not have such parallel corpora, we use pivot translation, which involves translating source sentences into target sentences via a resourcerich language known as a pivot (Utiyama and Isahara, 2007;Cohn and Lapata, 2007). | contrasting |
| train_19771 | Both the pivot and zero-shot translation generally assume that bilingual corpora covering the source language and those that cover the target language are obtained from the different texts. | comparative analysis is difficult to perform under this setting because the vocabulary differs and we cannot construct a direct translator. | contrasting |
| train_19772 | With the pivot translations, the BLEU scores for most language pairs were worse than those for direct translation. | the score for the Ko → Zh pair improved; thus, we can conclude that the pivot translation can achieve quality close to that of direct translation. | contrasting |
| train_19773 | "The park" in En2-2 are extra words of Ja-2. | the speaker also spoke "I've never heard Tetsugakudo Park." | contrasting |
| train_19774 | Human translators tend to translate literally between languages of the same family, such as English and French. | with language pairs for which it is difficult to make literal translations, such as English and Japanese, professional translators elaborately generate context-dependent translations to make the translations natural and the meaning of the dialogue identical. | contrasting |
| train_19775 | There have also been some attempts in developing corpora annotated for negation in other languages as demonstrated by CNESP (Chinese Negation and Speculation corpus) (Zou et al., 2015), which closely follows the annotation style of the BIOSCOPE corpus. | tailoring the annotation style to a specific domain leads these corpora to differ in what was annotated and how. | contrasting |
| train_19776 | To our knowledge, there has not been any previous work on projecting negation across languages. | previous studies have experimented with projecting semantic annotations via word-alignment information extracted from large parallel corpora. | contrasting |
| train_19777 | (= It is not the case that money is everything). | let us consider the following example annotated according to the original guidelines. | contrasting |
| train_19778 | If the negated event is only in the matrix clause, subordinates are usually excluded from the scope of negation. | chinese allows for it-cleft constructions like the one in (18), where only the subordinate clause, which appears before the event of the main clause, is in the scope of negation. | contrasting |
| train_19779 | These false negatives are caused by the fact that Chinese translates positive terms in English as negation (same as the cases in (32)∼(34)) but in some cases are just due to English words aligning to a null token. | in 16% of the cases we observed that the event from English is projected onto a completely different span of the sentence. | contrasting |
| train_19780 | Hindi is the fourth-most spoken language in the world, and third-most spoken language along with Urdu (both are registers of the Hindustani language). | english is spoken by just around 125 million people in India, of which a very small fraction are native speakers. | contrasting |
| train_19781 | Hence, there is immense potential for English-Hindi machine translation. | the parallel corpora available in the public domain is quite limited. | contrasting |
| train_19782 | For English, we used true-cased representation for our experiments. | the parallel corpus being distributed is available in the original case. | contrasting |
| train_19783 | The more of these data are available, the better the quality of the SMT system. | for some language pairs such as Persian-English, parallel sources of this kind are scarce. | contrasting |
| train_19784 | Parallel corpora are an important part of a statistical machine translation system. | there is a lack of such data available for everyone. | contrasting |
| train_19785 | First, using character n-grams of size 5, instead of using the default range of 3-6, does not significantly decrease the accuracy (except for Czech). | using a smaller number of character n-grams leads to faster training, especially when using the CBOW model. | contrasting |
| train_19786 | The extractive summarization methods are typically unsupervised, for example Luhn (Luhn, 1958), Latent Se-mantic Analysis (Steinberger and Ježek, 2004), LexRank (Erkan and Radev, 2004), TextRank (Mihalcea and Tarau, 2004), SumBasic (Vanderwende et al., 2007) or KL-Sum (Haghighi and Vanderwende, 2009). | very good results in extractive summarization were achieved recently with recurrent neural networks Nallapati et al., 2016b;Nallapati et al., 2017). | contrasting |
| train_19787 | Initially, five Czech news websites were selected to create the dataset: novinky.cz, lidovky.cz, denik.cz, idnes.cz, and ihned.cz. | during the cleanup of the data, we decided to drop ihned.cz from the dataset, because too many of its pages turned out to be just abridged versions of the actual articles with links to paid content. | contrasting |
| train_19788 | This suggests that their trend for these features may not have followed the same pattern over time. | some of the authors composed work some time before or after James and Twain and extrapolation may have caused a drop in prediction accuracy. | contrasting |
| train_19789 | The evaluated systems, however, failed to produce any entry points for those queries. | we are not aware of previous attempts at labeling cross-document coreference using the event hopper framework, which focuses on annotators' intuitions about event reference and allows event argument mismatches as well as event mention realis mismatches. | contrasting |
| train_19790 | It is fairly easy to conclude that d4 follows from d1 and d2. | considering only d3 as the source, although related but d4 has entirely diverse information. | contrasting |
| train_19791 | Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. | this subset covers only few languages (English, German, French and Spanish) and almost all published works focus on the the transfer between English and German. | contrasting |
| train_19792 | Based on the objective being sought, researchers focused their attention on classifying questions based on their subject (Conner, 1927), the educational objective (Bloom, 1956), the difficulty level (İnce, 2008) or the question goal (Lehnert, 1977). | we found that this domain lacks datasets and taxonomies that aim to analyze questions with respect to expected answer types. | contrasting |
| train_19793 | Although multiple applications can benefit from analyzing questions based on this criterion, the majority of datasets and taxonomies were designed for question answering systems. | this type of classification can have multiple potential applications in educational systems as well, from facilitating student assessment to identifying the students' knowledge gaps in order to initiate classrooms discussions. | contrasting |
| train_19794 | More specifically, 24% of disagreements were for the Con-textSensitive class, followed by VeryShortAnswer and Oth-erConstructedResponse with 18% each. | the annotators agreed in all cases when labeling Drawing and Ordered questions. | contrasting |
| train_19795 | This can be explained by the fact that these classes have clearer patterns in data and can be easily separated from the other class types. | identifying if a question elicits context sensitive information or a short versus a longer response appears to be more subjective, based on each annotator's interpretation. | contrasting |
| train_19796 | This is an important observation regarding our data, since it was shown that involving deep questioning during tutoring can improve knowledge learning (Chi et al., 1994). | the least frequent question types in the data collected during tutoring are SelectN, Drawing and Equation. | contrasting |
| train_19797 | Although Equation and Solution have less training examples, these classes possess clearer patterns in the data. | the worst performing class is Clarification, for which the simple classifier (strawman) was not able to capture patterns. | contrasting |
| train_19798 | Like CRFs, RNN based model also rely on annotated corpus heavily. | we found that the RNN based model can identify more loanwords (more than person names) than CRFs based model, a possible reason is that the RNN encoder-decoder framework can learn features automatically and use its internal memory to process arbitrary sequences of inputs. | contrasting |
| train_19799 | Distant supervision has been widely used in the task of relation extraction (RE). | when we carefully examine the experimental settings of previous work, we find two issues: (i) The compared models were trained on different training datasets. | contrasting |