id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_93900 | And our analysis shows that on average 0.72% and 3.56% relative improvement of 2-hop DCA-(SL/RL) over 1-hop DCA-(SL/RL) or baseline-SL (without DCA) is statistically significant (with P-value < 0.005). | as Figure (4.c) shows, when |E| increases, the running time of these two global EL models increases shapely, while our DCa model grows linearly. | neutral |
train_93901 | To reduce the high cost of computing spectral norm σ(W ) using singular value decomposition at each iteration, we follow (Yoshida and Miyato, 2017) and employ the power iteration method to estimate σ(W ) instead. | similarly, to extract the city-traffic related event, Anantharam et al. | neutral |
train_93902 | Nevertheless, newsworthy events are often discussed by many tweets or online news articles. | for calculating the precision of the 4-tuple, we use following criteria: • (1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event? | neutral |
train_93903 | We denote the above measurement as the Pattern Mover Similarity (PMS) measurement. | the pattern mover similarity network (PMSN) is a unified model for adaptively scoring the similarity of entities or patterns to seed entities. | neutral |
train_93904 | Besides, due to the sparse supervision problem, entity scoring is often unreliable, which in turn influences the delayed feedback estimation. | base on this intuition, we devise the reward function as follows: where E 0 is the set of known entities in root node, E is the set of new entities, SIM(e, E) is the similarity score of newly extracted entity e to known entities, σ(•) is the sigmoid function, and a is a "temperature" hyperparameter. | neutral |
train_93905 | The pattern mover similarity network (PMSN) is a unified model for adaptively scoring the similarity of entities or patterns to seed entities. | this step evaluates generated patterns using sparse supervision and other sources of evidence, e.g., pattern embedding similarity. | neutral |
train_93906 | The lower performance on TTL and FAC entities is likely due to the fact that the context patterns of TTL and FAC entities are similar to those of person and location names respectively, which makes them easily be regarded as special person names and location names respectively. | the whole MCTS algorithm looks like a tree structure (see Figure 2(b)), where the node s represents for known entities, i.e., both seed entities and previously extracted entities, and the edge, (b) The Monte Carlo Tree Search for the pattern evaluation in a bootstrapping system for entity set expansion. | neutral |
train_93907 | We apply GCN to construct multi-lingual structural representations for cross-lingual transfer learning. | here, A i,j = 1 denotes the presence of a directed edge from node i to node j in the dependency tree. | neutral |
train_93908 | These approaches, however, incorporate language-specific characteristics and thus are costly in requiring substantial amount of annotations to adapt to a new language (Chen and Vincent, 2012;Blessing and Schütze, 2012;Li et al., 2012;Danilova et al., 2014;Agerri et al., 2016;Hsi et al., 2016;Feng et al., 2016). | two entity mentions are unlikely to be combined into one word in Arabic, thus Relation Extraction does not suffer from tokenization errors and corresponding POS features. | neutral |
train_93909 | In all annotated sentences, 68, 124 are used for training, 22, 631 for validation and 15, 509 for testing. | • Clean-Sentence Clean-Label (CSCL): All sentences and all labels are clean (Figure 1(a)). | neutral |
train_93910 | Distant supervision was proposed in (Mintz et al., 2009) to automatically generate large dataset through aligning the given knowledge base to text corpus. | some of them, including (Riedel et al., 2010;Zeng et al., 2015;Lin et al., 2016;Ji et al., 2017;Zeng et al., 2018;Feng et al., 2018;Wang et al., 2018b,a), formulate the task as a multi-instance learning problem where only one label is allowed for each bag. | neutral |
train_93911 | Assume the event triggering as the starting node (the initial EDAG), there comes a series of path-expanding sub-tasks following a predefined event role order. | for each document and each event type, we pick one predicted record and one most similar ground-truth record (at least one of them is non-empty) from associated event tables without replacement to calculate event-role-specific true positive, false positive and false negative statistics until no record left. | neutral |
train_93912 | (2) To a certain extent, the NPN model could alleviate the problem by utilizing hybrid representation learning and nugget generator in a fix-sized window. | for each character, the model should identify if it is a part of one trigger and correctly classify the trigger into one specific event type. | neutral |
train_93913 | (2018) and Sohrab and Miwa (2018) with Fscore value on all categories. | our model can locate entities precisely by detecting boundaries using sequence labeling models. | neutral |
train_93914 | Hard parameter sharing greatly reduces the risk of overfitting (Baxter, 1997) and increases the correlation of our boundary detection module and categorical label prediction module. | in our model, it is inconvenient and inefficient for the reason that we predict entity categorical labels after all boundary-relative regions have been detected. | neutral |
train_93915 | The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. | the hidden state of bidirectional LSTM can be expressed as following: where x t i is the token representation which is mentioned in section 3.1. | neutral |
train_93916 | Recent works that automatically canonicalize OpenKGs, including (Galárraga et al., 2014) and CESI (Vashishth et al., 2018), pose canonicalization as a clustering task of the NPs and RPs. | we thank the anonymous reviewers for their constructive comments. | neutral |
train_93917 | Previously, statistical models (Mintz et al., 2009;Hoffmann et al., 2011;Surdeanu et al., 2012) have used designed features, such as syntactic and lexical features, and have then been trained by logistic regression or expectation maximization. | the amount of CNN filter is set among {64, 128, 230, 256}. | neutral |
train_93918 | Huang and Wang (2017) demonstrated that multi-layer ResCNNs network does achieve performance improvement by adding residual identity shortcuts, which aligns with the study that deeper CNN has positive effects on noisy NLP tasks (Conneau et al., 2017). | (4) State-of-the-art DSRE Noise Filter Systems: DSGAN (Qin et al., 2018) is a model to filter out noise instances. | neutral |
train_93919 | Figure 1: Mislabeling issue example in DSRE: the entity pairs bag containing three sentences is labeled as person/company/founder. | in a minibatch, entity pair bags with conflicts in sentence selection are dropped, the remaining bags are used to update the network parameters. | neutral |
train_93920 | We adopt the cross-entropy loss as the training objevtive function. | here we get a vector representation w i ∈ R kw for each word from a pre-trained word embedding matrix. | neutral |
train_93921 | In this paper, we propose REDS2 2 , a new neural relation extraction method in the multiinstance learning paradigm, and design a hierarchical model structure to fuse information from 1-hop and 2-hop DS. | treating S T equally as S may mislead the prediction, especially when their sizes are extremely imbalanced. | neutral |
train_93922 | In this task, entity representations can be obtained either by contextualized entity representations or descriptive entity representations. | we use contextualized entity representations to decode the hyperlinked Wikipedia description, and also use the descriptive entity representations to decode the linked context. | neutral |
train_93923 | We conduct extensive experiments on five public datasets from various domains (summarized in Table 1 and detailed in Appendix A). | the cumulative reward from current time step to the end of an episode would cancel the intermediate rewards and thus reflect whether the current action improves the holistic label assignment or not. | neutral |
train_93924 | Different from most of existing global HTC methods that rely on pre-specified features (Gopal and Yang, 2013) as input or build on specific models (Cai and Hofmann, 2004;Vens et al., 2008;Silla Jr and Freitas, 2009), our framework is trained in an end-to-end manner by leveraging a differentiable feature representation learning model as the base model. | at the time of inference, we greedily select labels with the highest probability asL x i . | neutral |
train_93925 | The local loss of HiLAP-SL is defined as where T is the lowest label's 449 level of one example and O t estimates the binary cross entropy over the candidate labels C(l t ): Intuitively, HiLAP-SL works as if there were a set of local classifiers, although most of its parameters (except for the label embedding l) are shared by all the labels so that there is no need to train multiple classifiers. | the policy P puts x i at the root label in the beginning. | neutral |
train_93926 | It demands us to extract the separate senses from the overall word representation to avoid the ambiguity. | firstly, we construct a matrix X i the columns of which is the normal vectors w of the hyperplanes for the ith word. | neutral |
train_93927 | Our core idea that decomposing the semantic capsules by projecting on hyperplanes is a necessary complement to capsule network to tackle the polysemy problem in various natural language processing tasks. | existing capsule networks for natural language processing cannot model the polysemic words or phrases which contain multiple senses. | neutral |
train_93928 | This result further confirms that the proposed adaptive attention fusion strategy is much helpful to learn the label-specific document representation for multi-label text classification. | then, we can obtain the the linear combination of the context words for each label with the aid of label-word attention score (A (s) ) as follows. | neutral |
train_93929 | L takes adantage of label text to explicitly determine the semantic relation between documents and labels, however, label text is not easy to distinguish the difference between labels (e.g., Management vs. Management movies). | in order to seamlessly integrate the above two parts, an adaptive fusion strategy is designed, which can effectively output the comprehensive document representation to build multilabel text classifier. | neutral |
train_93930 | To make full use of local and global information, Zhao et al. | we use attention mechanism. | neutral |
train_93931 | Here we use GRU as the local feature extractor, and features produced for window x t h+1:t can be represented as: To maintain translation invariant, a max-overtime pooling layer is then applied to CNN or DRNN layer, the pooling result is regarded as the output of Encoder2: Set enc 1 as the global representation produced by Encoder1, required information for a certain window x t h+1:t with size h is defined as g t : where G is a function of interaction mode. | en-coder1 serves as a global information provider, while encoder2 performs as a local feature extractor and is directly fed into the classifier. | neutral |
train_93932 | Our approach simply conditions on the embedding of the latent variable value and therefore does not add many parameters. | similar trends appear for the other datasets. | neutral |
train_93933 | 2016 The task of summarization is to compress long documents by identifying and extracting the most important information from the source documents. | 2015; Post (2018) in machine translation, (Gkatzia and Mahamood, 2015;Reiter and Belz, 2009;Reiter, 2018) in natural language generation, Lee et al. | neutral |
train_93934 | The former, which also makes use of the Neu-ralREG approach (Castro Ferreira et al., 2018a), tackles standard NLG tasks (discourse ordering, text structuring, lexicalization, referring expression generation and textual realization) in sequence, while the latter does not address these individual tasks, but directly tries to learn how to map RDF triples into corresponding output text. | the Transformer could have had difficulties caused by the task's design, where triples and sentences were segmented by tags (e.g. | neutral |
train_93935 | We find that word mover based metrics combining BERT fine-tuned on MNLI have highest correlations with humans, outperforming all of the unsupervised metrics and even supervised metrics like RUSE and S 3 f ull . | its variants consider overlap of unigrams (-1), bigrams (-2), unigrams and skip bigrams with a maximum gap of 4 words (-SU4), longest common subsequences (-L) and its weighted version (-W-1.2), among others. | neutral |
train_93936 | The major improvements come from contextualized BERT embeddings rather than word2vec and ELMo, and from fine-tuning BERT on large NLI datasets. | our goal in this paper is to devise an automated evaluation metric assigning a single holistic score to any system-generated text by comparing it against human references for content matching. | neutral |
train_93937 | We fine-tune BERT on each of these, yielding different contextualized embeddings for our general evaluation metrics. | we also observed that soft alignments (MoverScore) marginally outperforms hard alignments (BERTScore). | neutral |
train_93938 | We are grateful to Rik Koncel-Kedziorski and Hannaneh Hajishirzi for sharing their system outputs. | a biLSTM-based keyphrase reader, with hidden states h e k , is used to encode all keyphrases in M. We also insert entries of <STaRT> and <END> into M to facilitate learning to start and finish selection. | neutral |
train_93939 | The meaning of this sentence is "China encourages foreign merchants to invest in agriculture". | secondly, the sRL performance gap between AU-TODEL and AUTO is large though they provide the same correct syntactic information. | neutral |
train_93940 | Another drawback of VerbNet is its organization into Levin's classes (Levin, 1993), namely, 329 groups of verbs sharing the same syntactic behavior, independently of their meaning. | while PropBank roles do not provide clear explicit semantics, FrameNet roles are explicit but frame-specific (e.g., Ingestor and Ingestibles for the "Ingestion" frame, or Impactor and Impactee for the "Impact" frame). | neutral |
train_93941 | (2018) (89.5% vs 89.6%, respectively). | yet, VerbNet suffers from low coverage, in that it includes only 6791 verbs, which makes it a suboptimal resource for wide-coverage SRL. | neutral |
train_93942 | Furthermore, to Levin's classes 329 Thematic roles 39 Senses 6,791 PropBank Verbs 5,649 Proto-roles 6 Framesets 10,687 WordNet ----Synsets 13,767 VerbAtlas Frames 466 Semantic roles 25 Synsets 13,767 Table 1: Quantitative analysis of popular verbal resources. | propBank's major drawback is that its roles do not explicitly mark the type of semantic relation with the verb, instead they just enumerate the arguments (i.e., Arg0, Arg1, etc.). | neutral |
train_93943 | In the SP task, the inputs are the target sentence together with 4 surrounding sentences. | to make correct predictions, the model needs to be aware of both typical orderings of events as well as how events are described in language. | neutral |
train_93944 | We also experimented with adding hidden layers to the DiscoEval classification models. | all of our sentence embedding models use this loss. | neutral |
train_93945 | Examples (e, f) illustrate unk and never labels. | possession is an asymmetric semantic relation between two entities, where one entity (the possessee) belongs to the other entity (the possessor) (Stassen, 2009). | neutral |
train_93946 | devices (e.g., watch, guitar, cell phone) yield more possessions (alienable and control labels) and most of the possessions are alienable. | example (c) illustrates an alienable possession in which the author possesses the possessee (i.e., the bag) before, during and after tweeting. | neutral |
train_93947 | We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. | ranking Candidates Each of the three tasks has a different method for determining candidates. | neutral |
train_93948 | -full list in Appendix H) which were selected by us to pro- vide both inspiration and cohesion to annotators. | the unseen test set is com-Category: Graveyard Description: Two-and-a-half walls of the finest, whitest stone stand here, weathered by the passing of countless seasons. | neutral |
train_93949 | However, it is not satisfying since texts and images are forced to be in one-to-one correspondence. | following (Karpathy and Li, 2015;Carvalho et al., 2018), we use a max-margin ranking loss to ensure the gap between both terms is higher than a fixed margin γ (cf. | neutral |
train_93950 | Many species are visually similar (e.g., Figure 1, top), making them difficult for a casual observer to label correctly. | it struggles on the two data subsets with highest visual similarity (VISUAL, SPECIES). | neutral |
train_93951 | NYT contains formal, well written news stories from the New York Times Corpus. | in the end, a set of 14 handcrafted patterns are used to extract a predicate-argument triple from each utterance. | neutral |
train_93952 | We first run several existing IE systems on the labeled data and use their extraction results as input features along with other rich features including word embedding, part-ofspeech embedding, syntactic role embedding and syntactic dependency information. | second, supervised IE systems require the target relations to be predetermined and learn to extract only the predefined relations. | neutral |
train_93953 | One strategy for tuple matching would be to enforce an exact match by matching the boundaries of the extracted and benchmark tuples in text. | while most existing Open IE systems extract verbal relations, each of the systems focuses on different relational structures and extraction rules, resulting in heterogeneous results. | neutral |
train_93954 | End-to-end systems Miwa and Bansal, 2016) are a promising solution for addressing error propagation. | table 3 presents the performance of our baseline systems compared to the human performance. | neutral |
train_93955 | For example, the authors of DocRED report that 41% of facts require reasoning over multiple sentences in a document (Yao et al., 2019). | they do not provide an exhaustive annotation of facts, which is needed for end-to-end evaluation of KBP. | neutral |
train_93956 | This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. | 2 To benchmark the performance, we compare to three rule-based baselines. | neutral |
train_93957 | Across the three state changes, the model suffers a loss of performance in the movement cases. | we observe that such "post-conditioning" approaches don't perform significantly better than rule-based baselines on the tasks we study. | neutral |
train_93958 | Although the first one has proven to be beneficial during pre-training to some NLP tasks, we want to check how much its influence is to our final translation performance. | as mentioned in Section 3.4, it is interesting to explore the different usage of pre-trained decoders in the MT task. | neutral |
train_93959 | In particular, we can show that when y is fixed (observed), the {z n } N n=1 variables are d-separated (Bishop, 2006) i.e., are mutually independent. | the transformer model however fails to decode the later portions of the long input accurately. | neutral |
train_93960 | In conventional EM algorithm for shallow probabilistic graphical model, the M-step is generally supposed to have closed-form solution. | the model cannot be trained by directly optimizing the data log-likelihood because of its non-convex property. | neutral |
train_93961 | Tasks Table 1 compares LaSyn versions against some of the state-of-the-art models on the IWSLT'14 dataset. | since deep neural network is differentiable, we can update θ by taking a gradient ascent step: The resulting algorithm belongs to the class of generalized EM algorithms and is guaranteed (for a sufficiently small learning rate η) to converge to a (local) optimum of the data log likelihood (Wu, 1983). | neutral |
train_93962 | ‡ ‡": significantly better than SAMPLE without CE (p < 0.01). | the evaluation metric is BLEU (Papineni et al., 2001) as calculated by the multi-bleu.perl script. | neutral |
train_93963 | Serving as a weight assigned to each synthetic sentence pair, sentence-level confidence is expected to help to minimize the negative effect of estimating parameters on sentences with lower confidence. | as the synthetic corpora generated by the NMT model are inevitably noisy, translation errors can be propagated to subsequent steps and prone to hinder the performance (Fadaee and Monz, 2018;Poncelas et al., 2018). | neutral |
train_93964 | Reliable uncertainty quantification is key to building a robust artificial intelligent system. | further enlarging the monolingual corpus hurts the translation performance. | neutral |
train_93965 | It is appealing to find that afte 3 iterations, the distribution of the voting weightsα ij will converge to a sharp distribution and the values will be very close to 0 or 1. | a natural question was raised, Will carefully designed aggregation operations help the Enc-Dec paradigm to achieve the best performance? | neutral |
train_93966 | To support cross-lingual applications, it is essential for us to integrate these language-specific KGs into a unified KG. | suppose we need to align the entities in KG 1 to KG 2 , for each entity e i in KG 1 , we rank each entity e j in KG 2 based on the similarity between e i and e j . | neutral |
train_93967 | Since the conception of linear mapping between two spaces, there are only very small minor improvement in the supervised mapping function itself and largest improvements are obtained by updating the word translation retrieval in the shared space and normalization steps (Artetxe et al., 2018). | second, we adapt an existing technique for piecewise linear regression to instead perform piecewise linear mapping. | neutral |
train_93968 | It also first trains the NMT model on out-of-domain training corpus, and then finetunes it using both out-of-domain and oversampling in-domain training corpora. | to this end, we propose an iterative dual domain adaptation framework for NMt. | neutral |
train_93969 | In this group of experiments, we investigated the impacts of out-of-domain corpus size on our proposed framework. | we conjecture that during the process of knowledge distillation, by assigning non-zero probabilities to multiple words, the output distribution of teacher model is more smooth, leading to smaller variance in gradients (Hinton et al., 2015). | neutral |
train_93970 | As shown in Figure 2(a), previous studies usually mix them into one out-of-domain corpus, which is applicable for the conventional one-to-one NMT domain adaptation. | the other is to use the mixed-domain training corpus to construct a unified NMt model for all domains. | neutral |
train_93971 | In this group of experiments, we investigated the impacts of out-of-domain corpus size on our proposed framework. | we first inspected its impacts on the development sets. | neutral |
train_93972 | In this paper, according to the Generate sequence: Compute KD loss: Compute NLL loss: Compute Agent loss: Model loss: 13 Update gradients for each agent; 14 end 15 until convergence; previous experiments, we find that a simple multitask learning technique without sharing any modules presents promising performance, In Algorithm 1, we describe the overall procedure of our approach. | we argue that our model can bring further improvement using their back-translation technique. | neutral |
train_93973 | In this paper, we propose a universal yet effective learning method for training multiple agents. | according to the empirical achievements of previous studies on training with two agents, it is natural to consider the training with more than two agents, and to extend our study to the multi-agent scenario. | neutral |
train_93974 | The resulting optimization is also called Procrustes problem. | thanks to the pivot language, we can pre-train a source encoder and a target decoder without changing the model architecture or training objective for NMt. | neutral |
train_93975 | However, pivoting requires doubled decoding time and the translation errors are propagated or expanded via the two-step process. | experiments in WMT 2019 French→German and German→Czech tasks show that our methods significantly improve the final source→target translation performance, outperforming multilingual models by up to +2.6% BLeU. | neutral |
train_93976 | For all transfer learning setups, we learned byte pair encoding (BPE) (Sennrich et al., 2016) for each language individually with 32k merge operations, except for cross-lingual encoder training with joint BPE only over source and pivot languages. | to other NMT transfer scenarios (Zoph et al., 2016;Nguyen and Chiang, 2017;Kocmi and Bojar, 2018), this principle has no language mismatch between transferor and Figure 2: Step-wise pre-training. | neutral |
train_93977 | (2014) greatly improved the initial sequence-tosequence architecture; rather than conditioning each target word on the final hidden unit of the encoder, each target word prediction is conditioned on a weighted average of all the encoder hidden units. | there have been several attempts to mitigate this discrepancy, mostly with a view to improving the quality of output translations (Alkhouli et al., 2016(Alkhouli et al., , 2018Chen et al., 2016;Liu et al., 2016) -though some work has focused specifically on alignment (Legrand et al., 2016;Zenkel et al., 2019). | neutral |
train_93978 | Experimental results show that the joint model indeed improves performances on both ZP prediction and translation. | this is a key problem for the pipeline framework, since numerous errors would be propagated to the subsequent translation process. | neutral |
train_93979 | The proposed models consistently outperform other models in all cases, demonstrating the superiority of the joint learning of ZP prediction and translation. | in addition, relying on external ZP prediction models in decoding makes these approaches unwieldy in practice, due to introducing more computation cost and pipeline complexity. | neutral |
train_93980 | Bilingual Content Agreement Intuitively, the translated source contents should be semantically equivalent to the translated target contents, and so do untranslated contents. | in addition, two auxiliary learning signals facilitate GDR's acquiring of our expected functionality, other than implicit learning within the training process of the NMT model. | neutral |
train_93981 | This issue may be attributed to the poor ability of NMT of recognizing the dynamic translated and untranslated contents. | we add additional Redundant Capsules Ω R (also known as "orphan capsules" in Sabour et al. | neutral |
train_93982 | Such methods find the erroneous words by calculating the edit distance between the MT hypothesis and its corresponding reference. | this suggests wrong translation is more severe than missing word. | neutral |
train_93983 | This work represents the preliminary work for more multi-faceted machine translation evaluation, focusing on multiple aspects instead of only a score or ranking, with the goal of push MT techniques to a higher standard. | here, we only briefly list the features adopted in our model as they are all commonly used standard features. | neutral |
train_93984 | Evaluation We evaluate the effectiveness of estimating word importance by the translation performance decrease. | several researchers turn to expose systematic differences between human and NMT translations (Läubli et al., 2018;schwarzenberg et al., 2019), indicating the linguistic properties worthy of investigating. | neutral |
train_93985 | We find that the syntactic information is almost independent of the word importance value. | we analyze the linguistic behaviors of words with the importance and show its potential to improve NMT models. | neutral |
train_93986 | It confirms the existence of important words, which have greater impacts on translation performance. | we measure the word importance by attributing the NMT output to every input word through a gradient-based method. | neutral |
train_93987 | Modern word embeddings encode character-level knowledge (Bojanowski et al., 2017), which should-in principle-enable the models to learn this behaviour; but morphological generalization has never been directly tested. | their ability to generalize decreases as the slots get less frequent and/or the paradigms get larger. | neutral |
train_93988 | order of the machine-translated sentence. | the motivation is to reduce noise induced by problematic word alignments. | neutral |
train_93989 | Previous work aims to build a well-formed tree (Tiedemann and Agić, 2016) from source dependencies, solving word alignment conflicts by heuristic rules. | the code and related data will be released publicly available under Apache License 2.0. | neutral |
train_93990 | (2018) used UD Treebanks 2.1, which is not the most up-to-date version. | the structure loss L is the pointing loss for the pointer network: where θ denotes the model parameters, y <t represents the subtrees that have been generated by our parser at previous steps, and t is the number needed for parsing the whole sentence (i.e., number of words in dependency parsing and spans containing more than two EDUs in discourse parsing). | neutral |
train_93991 | Also, the decoder state for "sell" is far apart from the one for "pens". | considering the performance has already exceeded the human agreement of 95.7 F 1 , this gain is remarkable. | neutral |
train_93992 | In this paper, we propose a Hierarchical Pointer Network (H-PtrNet) parser to address the above mentioned limitations. | more relevant information could be diminished in a sequential decoder, especially for long range dependencies. | neutral |
train_93993 | In this paper we aim to render semi-supervised learning for semantic role labeling as simple as possible, by eliminating the reliance on multiple external pre-processing tools. | figure 2 illustrates the auxiliary modules and the types of context they see. | neutral |
train_93994 | Our model does not rely on external tools, and is generally applicable across semantic role representations based on dependencies or constituents (i.e., phrases or spans). | in the third block, we remove cross-view training from our model, and observe a 0.8% drop in F 1 over the full model. | neutral |
train_93995 | We also apply CVT on the first hidden layer of the sentence learner to further improve the performances of auxiliary tasks, utilizing the views introduced in for sequence tagging and dependency parsing. | we add the following four auxiliary prediction modules to the model: The "forward" module makes predictions without seeing the right context of the current word. | neutral |
train_93996 | Some of the previous work also proposes sequence labeling models with shared parameters between languages for performing cross-lingual knowledge transfer (Lin et al., 2018;Cotterell and Duh, 2017;Yang et al., 2017;Ammar et al., 2016;Kim et al., 2017). | we also validate our approaches on a distant language pair, English-Chinese, and the results are competitive with previous methods which use large-scale parallel corpora. | neutral |
train_93997 | N LL i is the negative log likelihood of language i in Eq (4). | we leverage the mean and variance of internal distributions for alignment. | neutral |
train_93998 | This method achieved stateof-the-art performance on the four datasets. | the F1 score of the OntoNotes and Weibo datasets even suffered a serious reduction around 1.5% and 1.8%, respectively. | neutral |
train_93999 | In our CM-Net, information exchanges are performed simultaneously with knowledge diffusions in both directions. | in the first CM-block, the hidden state h t is initialized with the corresponding word embedding. | neutral |
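The rows above show only a preview of the train split. Below is a minimal sketch of how a split with this schema (columns `id`, `sentence1`, `sentence2`, `label`) could be loaded and inspected with the Hugging Face `datasets` library; the dataset identifier is a placeholder, not the actual repository name.

```python
# Minimal sketch: load a sentence-pair dataset with the schema shown above.
# "user/sentence-pair-nli" is a hypothetical identifier (assumption); replace
# it with the real dataset repository name.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/sentence-pair-nli", split="train")

# Columns follow the preview: id, sentence1, sentence2, label.
example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"][:80])
print(example["sentence2"][:80])

# Distribution over the 4 label classes reported in the header.
print(Counter(ds["label"]))
```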