id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes)
---|---|---|---|
train_94500 | In recent years, neural network based prediction methods have shown great success on this Twitter user geolocation prediction task (Rahimi et al., 2017;Miura et al., 2017). | with the feature fusion layer, our model can accommodate various feature combinations and achieves state-of-the-art results over three commonly used benchmarks under different feature settings. | neutral |
train_94501 | We only get metadata for about 53% and 67% users in Twitter-US and Twitter-World respectively. | such private information is only accessible to internet service providers. | neutral |
train_94502 | In order not to over-burden the workers, we filtered out conversations consisting of more than 20 comments. | (2017) is different from the other datasets as it investigates the behavior of hate-related users on Twitter, instead of evaluating hate-related tweets. | neutral |
train_94503 | Second Order Consistency Feng et al. | we examine the model using the simple leave-one-out method similar to Ribeiro et al. | neutral |
train_94504 | 2) we cannot guarantee that the resulting text is still a natural input. | as we expect, tokens strongly associated with aggression, such as " ", "dead" and many other curse words appear frequently as the most influential tokens. | neutral |
train_94505 | Since the majority of tweets usually has rationale rank 0, the averaged rationale rank is small and the difference between each model is "diluted" by the prevailing zeros. | surprisingly, words such as "a" and "on" are frequently considered the most influential by the model as well. | neutral |
train_94506 | Because of the lower quality, we omit these language-pairs from our experiments below. | the English-Tagalog and English-Indonesian tasks yielded 99% precision, the English-Romanian and English-Greek tasks 87%, and the English-Russian task 85%. | neutral |
train_94507 | Further, compared with MTL2 which uses a "parallel" architecture to make predictions for two tasks, our Hierarchical-PSV performs better than MTL2. | (1): where the multiplication operation expands the receptive field of a GCN layer, and adding an identity matrix elevates the importance of t i itself. | neutral |
train_94508 | (2017) project the utterances and intent labels to a same semantic space and then compute the similarities between utterances and intent labels (Chen et al., 2016;Kumar et al., 2017). | for example, the word "book" has different meanings in the two utterances: "Book a restaurant in Michigan for 4 people" and "Give 4 out of 6 points to this book". | neutral |
train_94509 | To improve business effectiveness and user satisfaction, accurately identifying the intents behind user utterances is indispensable. | to implement zero-shot intent classification more easily and intelligently, recent works rely more on the word embeddings of intent labels, which can be easily pretrained on text corpus. | neutral |
train_94510 | In Eq. 13, the overall loss function of the proposed dimensional attention capsule network is defined, where β is a trade-off parameter. | it is extremely challenging not only because user queries are sometimes short and expressed diversely, but also because it may continuously encounter new or unacquainted intents popped up quickly from various domains. | neutral |
train_94511 | For this purpose, we investigate domain adaptation for person-job fit, focusing on the text match between job postings and candidate resumes. | now, we study how such a mechanism work and why it is useful to improve the performance. | neutral |
train_94512 | A simple transfer strategy is to share the whole matrix in both target and source domains. | for our deep match network, we also prepare three simple single-domain variants, include (1) Tgt-Only trains the model with only target domain data, (2) Src-Only trains the model with only source domain data, and (3) Mixed trains the model with a simple mixture of source and target domain data. | neutral |
train_94513 | Due to space limit, we only select six sentences for each document. | such a method will make it less flexible to capture domain-specific match information, since the local match information is likely to be varied across domains. | neutral |
train_94514 | For example, designs a character-level CNN which alleviates the sparsity by mining different levels of information within the texts. | as shown in Table 3, we compare our HGaT with four variant models. | neutral |
train_94515 | A possible explanation is that the Paragraph Vector model assigns a distinct vector to each of the news articles, thus inconsistency between news in different news browsing sequences is easily created by the sequence-level fine-tuning. | bERT (Devlin et al., 2018) proposes a powerful pre-trained sentence encoder which is trained on unsupervised datasets based on the masked language model and next sentence prediction. | neutral |
train_94516 | Finally, we evaluate our LS system in an endto-end manner and compare its performance to that of the current state-of-the-art systems, including those reported in Paetzold and Specia (2016a) and the DRESS-LS system of Zhang and Lapata (2017). | we then compare the results to the SOTA simplification systems DRESS-LS by Zhang and Lapata (2017), which contains a specialised LS model, and the P&S simplification system (Paetzold and Specia, 2016a). | neutral |
train_94517 | We also show that combining out-of-domain, out-of-genre Medical Literature data with out-of-domain EHRs can provide significant improvement over using just out-ofdomain EHRs at the section and sentence level, depending on training data size. | we also found that Past Medical History was often incorrectly labeled as Medications. | neutral |
train_94518 | We follow this approach using our RNN as well as the BERT model where we first tune BERT to the MedLit data and then continue to tune that model on the EHR data. | it is common that the exact format of sections in a particular EHR does not adhere to known patterns. | neutral |
train_94519 | 1, the word "NFL" is more informative than "Today" for representing this news. | the attention weights of the search query and webpage title views are computed similarly. | neutral |
train_94520 | Second, webpage titles are more important than search queries. | we apply a CNN network to learn contextual representations of words within news titles by capturing their contexts. | neutral |
train_94521 | 1 both users may have similar interests on Star Wars, and both books have related topics. | in addition, different words and sentences in the same review may also have different importance. | neutral |
train_94522 | This is probably because our approach can mine the user-user and item-item relatedness by modeling the first-order and second-order interactions between users and items. | by optimizing the loss function, all parameters can be tuned via backward propagation. | neutral |
train_94523 | For example, in the sentence "This story book is very interesting" , the word "interesting" is more important than the word "story" in representing this book. | first, we want to validate the effectiveness of the attention network in the review content-view. | neutral |
train_94524 | In addition, we propose to apply a three-level attention network to select important words, sentences and reviews to learn informative user and item representations. | selecting important words via a word-level attention network can learn more informative sentence representations. | neutral |
train_94525 | (2018) replace words with semantically and syntactically similar adversarial examples. | moreover, DISP works better on the random attack because the embeddings of the original tokens tend to have noticeably greater Euclidean distances to randomly picked tokens than the distances to other tokens. | neutral |
train_94526 | According to (Hirschauer, 2010), coverage and divergence should be considered for the acceptance decision of a paper. | furthermore, some researchers casted the problem as a time series task, and focused on analyzing temporal features or patterns in the process of citation growth (Davletov et al., 2014;Xiao et al., 2016;Yuan et al., 2018). | neutral |
train_94527 | They typically perform on the nodes by updating the representations during training. | we connect distinct nodes based on simple heuristic rules and generate different edge representations for the connected nodes. | neutral |
train_94528 | In this setting, entities mapped to the same KB ID are considered as mentions of an entity concept and pairs of mentions correspond to the pair's multiple instances. | we name our model EoG, an abbreviation of Edge-oriented Graph. | neutral |
train_94529 | We briefly describe the model settings in this section. | document-level RE is not common in the general domain, as the entity types of interest can often be found in the same sentence (Banko et al., 2007). | neutral |
train_94530 | With the released data, we randomly sample 5,000 sentence pairs from it as the parallel corpus with limited data volume. | then the content-only representation is coupled with the representation of a style that differs from the input to produce a style-transferred sentence. | neutral |
train_94531 | We analyse the sentences generated by S2S model and by the CPLS model, and the statistics show that the average length of the text generated by S2S model is shorter, which may lead to the bias of the style classifier. | bLEU calculates the N-gram overlap between the generated sentence and the references, thus can be used to measure the preservation of text content. | neutral |
train_94532 | Compared with the Chinese literacy style datasets, the formality datasets are less challenging as discussed before. | this result may be explained by the fact that the edit distance between formal and informal texts are smaller than between ancient poems and modern Chinese texts. | neutral |
train_94533 | The reason we choose the Chinese ballad is that the content domain of the two styles should be close. | it is more challenging for model to preserve the content meaning when transferring between ancient poems and modern Chinese text. | neutral |
train_94534 | 3% of questions are unlikely to have an answer anywhere (e.g., 'what guides Santa home after he has delivered presents?'). | this makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. | neutral |
train_94535 | Experiments conducted a large-scale real-world product description dataset show that our model achieves the state-of-the-art performance in terms of both traditional generation metrics as well as human evaluations. | non-entity words are labeled as "normal word". | neutral |
train_94536 | In this section, we formally define the problem of Fine-Grained Entity Typing in KB. | yao Ming is associated with a typepath: BasketballPlayer ⊂ Athlete ⊂ Person ⊂ Agent ⊂ Thing. Many researches have been carried out in this field. | neutral |
train_94537 | Other more recent approaches include two-level reinforcement learning models (Takanobu et al., 2019), two layers of attention-based capsule network models , and self-attention with transformers (Verga et al., 2018). | through ablation analysis, we observe that the memory buffer and the KG features contribute significantly to this performance gain. | neutral |
train_94538 | DR: How often do you have pain in your arms? | (Miwa and Sasaki, 2014;Katiyar and Cardie, 2016;Zhang et al., 2017;Zheng et al., 2017;Verga et al., 2018;Takanobu et al., 2019) also seek to jointly learn the entities and relations among them together. | neutral |
train_94539 | Columns 1, 2, 3: lower is better / 4, 5, 6: higher is better. | though SCLS (Step 3) is a convex problem, Figure 4 shows that AP+ADMM and AP+ActiveSet improve Specificity and Sparsity over AP+ExpGrad, making the learned topics even more comparable to Gibbs. | neutral |
train_94540 | As a result, the basic Latent Dirichlet Allocation (Blei et al., 2003) is still prevalent for practitioners despite various recent advances (Srivastava and Sutton, 2017;Xun et al., 2017;Xu et al., 2018). | due to the efficiency of anchor-based inference, the moment construction often becomes the most expensive step for large corpora, but it is trivially parallelizable as the last averaging step is the only computation that couples individual documents. | neutral |
train_94541 | Alternatively, one can leverage third-order moments to provide sufficient statistics for identifiable topic inference (Anandkumar et al., 2012a,b). | overall, correlated topic modeling via tensor decomposition is not as flexible as using JSMF even if we factor out the trivial difference in time and space complexities. | neutral |
train_94542 | Provided with the set of anchor words S and the convex coefficients B_ki = {p_{Z_1|X_1}(k|i)}, one can easily recover B by applying Bayes' rule, where c_i := p_{X_1}(i) is the unigram probability of the word i, which is equal to Σ_j C_ij. | our RAW method takes less than 10 minutes to find 50 topics on Songs. | neutral |
train_94543 | Meanwhile, one of the distractors has a lower score even though it shares the lexical item "Walgreens" with the context of the edit. | our architecture tackles both Comment Ranking and Edit Anchoring tasks by encoding specific edit actions such as additions and deletions, while also accounting for document context. | neutral |
train_94544 | It is important to recall that the main goal of our work is to develop fast and efficient on-device neural text classification approach, which can achieve near state-of-the-art performance while satisfying the on-device small size and memory resource constrains. | we train the network with cross entropy loss and apply softmax over the output layer to obtain predicted probabilities y C k for each class C during inference. | neutral |
train_94545 | Also, character-level models are prone to errors due to long-range dependency (Sennrich, 2016). | mRR and PmRR are highly dependent on the length distribution of test data. | neutral |
train_94546 | Currently, neural machine translation systems widely use subword segmentation as de facto. | common practice is to uniformly sample from all possible prefixes within a minimal length constraint in characters (or words); the real distribution of the prefix length may differ from the uniform distribution. | neutral |
train_94547 | After extracting candidates, reranking algorithms (e.g., LambdaMART (Burges, 2010)) with additional features are used to align final candidates. | we use the same model size for the character baseline and our variants for the fairness since our goal is proposing a new method and comparing between baseline and ours rather than achieving the best performance with the restriction of having a similar number of parameters. | neutral |
train_94548 | We search for the sentence with the same word w pi from the current document, and feed the found sentence into the same Bi-LSTM model. | then, we provide some benchmark models on this dataset to boost the research of dialogue symptom diagnosis. | neutral |
train_94549 | To make it more clear, we show the inference results for each symptom with and without graph in Figure 5. | (Yaghoobzadeh and Schütze, 2016) used knowledge base and aggregated corpus-level contextual information to learn an entity's classes. | neutral |
train_94550 | True, False, and Uncertain are the inference results for whether the symptom exists in the patient. | these observations have verified the effectiveness of modeling the associations between symptoms via graphs for symptom inference. | neutral |
train_94551 | Most models use encoder-decoder architecture. | the weight w_{i,j} ∈ (0, 1). | neutral |
train_94552 | For sentence fusion (Section 5.1), the input consists of two sentences, which sometimes need to be swapped. | the performance of the tagger is impaired significantly when leaving out the SWAP tag due to the model's inability to reconstruct 10.5% of the training set. | neutral |
train_94553 | Different from traditional NQG task, the goal of CQG is to enhance the interactiveness and persistence of chit-chatting. | the research of open-domain conversational question generation (CQG) Hu et al., 2018) is still in its infancy. | neutral |
train_94554 | To speed up convergence, we initialize our model through pre-training the reading network and the generation network. | the salient spans and the news title are then fed to the generation network to synthesize a comment. | neutral |
train_94555 | (2018) do not publish their code for metric calculation, we employ a popular NLG evaluation project available at https://github.com/ Maluuba/nlg-eval, and modify the scripts with the scores provided in the data according to the formulas in (Qin et al., 2018) to calculate all the metrics. | a news article and a comment is not a pair of parallel text. | neutral |
train_94556 | However, since the encoder in the topic modeling component is expected to be as a discriminator and guarantee that texts generated by the sequence modeling decoder have specific topic information, the encoder cannot be with the discrete representations (e.g., one-hot representation) of texts as input. | we compare several baselines including models based on language model (i.e., LSTM LM, LSTM+LDA, Topic-RNN, TDLM) and models based on VAE (i.e., LSTM VAE, VAE+HF, TGVAE, DCNNVAE, DVAE). | neutral |
train_94557 | 2017; Semeniuta et al. | this work has been supported by National Key R&D Program of China (No. | neutral |
train_94558 | DVAE (Xiao et al., 2018) uses a Dirichlet latent variable to improve VAE. | no extra training supervised by labeled data is needed for this discriminator. | neutral |
train_94559 | Inferring the objects from the vision side helps learn the object relationships, and inferring from the language side helps learn the crossmodality alignments. | the learning rate is set to 1e−4 instead of 5e − 5. | neutral |
train_94560 | One important difference between the standard use of CRFs for sequence labeling and our task is that our "labels" do not correspond to a fixed set of classes that can be predicted for any input, but are as specific to the particular input example as the sequences to be labeled themselves. | there could be more than one candidate region with high IoU with the gold region, and they should all be considered as correct grounding for the phrase. | neutral |
train_94561 | For each dialogue, we invite at least four different workers to annotate. | after that, we concatenate the embeddings of the starting word (x^*_start) and the ending word (x^*_end) of each span, as well as its weighted embedding (x_i) and the length feature (φ(i)) to form its final representation e, where [ ] represents the concatenation, e_p and e_n are the mention representations of the targeting pronoun and the current candidate mention, and ⊙ indicates the element-wise multiplication. | neutral |
train_94562 | In this work, we focus on jointly leveraging the contextual and visual information to resolve pronouns. | extensive experiments demonstrate the effectiveness of the proposed model. | neutral |
train_94563 | We extract key frames within the annotated segment of each step via comparing similarity of different frames followed by manual filtering of redundant frames. | we annotate the theme for each makeup video. | neutral |
train_94564 | Based on the list, we search their official channels on the YouTube and crawl makeup instructional videos together with available meta data such as video id, duration, title, tags and English subtitles generated by YouTube automatically. | due to the data limitation, fine-grained semantic comprehension which requires to capture semantic details of multimodal contents has not been well investigated. | neutral |
train_94565 | Without bells and whistles, DEBUG surpasses the performance of the state-of-the-art models over various benchmarks and metrics at the highest speed. | we set the ground truth confidence of each frame based on its "centerness" in the segment. | neutral |
train_94566 | For example, Abney et al. | as we have seen in the Section 2, human curated NER datasets are by no means perfect. | neutral |
train_94567 | These numbers become 7000 and 4000 when k = 5, and 9000 and 7000 when k = 10. | in the fourth sentence, looking at its paragraph, our annotators figure out that this is a table about ships and vessels loading items at different locations. | neutral |
train_94568 | We first randomly partition the training data into k folds: We then train k NER models separately based on these k folds. | we use f (D, w) to describe the training process of an NER model using the training set D weighted by w. This training process will return an NER model M = f (D, w). | neutral |
train_94569 | With more iterations, the confidence of being correct lowers like a binomial distribution, which is the reason that we chose an exponential decaying weight function in Equation 4. | in this paper, we conduct empirical studies to understand these mistakes, correct the mistakes in the test set to form a cleaner benchmark, and develop a novel framework to handle the mistakes in the training set. | neutral |
train_94570 | We conduct extensive experiments on both the original CoNLL03 NER dataset and our corrected dataset. | we first estimate the precision of the detected mistakes of a single iteration. | neutral |
train_94571 | Recent advances in deep learning have yielded stateof-the-art performance on many sequence labeling tasks, including NER (Collobert et al., 2011;Ma and Hovy, 2016;Peters et al., 2018). | we begin by training a NER model Θ using the above model's outputs as training data. | neutral |
train_94572 | We present a bootstrapping recipe for improving low-resource NER. | we first compare the NER performance on the same number of annotated tokens. | neutral |
train_94573 | Besides, the code for computing Grad-CAM-Text was adapted from keras-vis 6 , whereas we used scikit-learn (Pedregosa et al., 2011) for decision tree construction. | for the n-gram random baseline (and other n-gram based explanation methods in this paper), n is one of the CNN filter sizes [2,3,4]. | neutral |
train_94574 | We proposed three human tasks to evaluate local explanation methods for text classification. | a confident but incorrect answer results in a large negative score. | neutral |
train_94575 | performance gain of the four proposed models, the retrieval precisions of GMSH, BMSH, GMSH-S and BMSH-S using 32-bit hashing codes on the three datasets are plotted together in Figure 4. | for BMSH, the difference between the best and worst precisions on the three datasets are 0.0123, 0.0052 and 0.0134, respectively, which are small comparing to the gains that BMSH has achieved. | neutral |
train_94576 | Generative hashing is often used to generate hashing codes in an unsupervised way. | it can be obviously seen that GMSH-S and BMSH-S outperform GMSH and BMSH by a substantial margin, respectively. | neutral |
train_94577 | To increase the modeling ability of (1), we may resort to more complex likelihood p_θ(D|z), such as using deep neural networks to relate the latent z to the observation x_i, instead of the simple softmax function in (2). | hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes. | neutral |
train_94578 | Furthermore, it is observed that the latent embeddings learned by BMSH-S can be clustered almost perfectly. | the averaged value over all testing documents is then reported. | neutral |
train_94579 | The samples from the Bernoulli mixture can be generated by first choosing a component c ∈ {1, 2, ..., K} from Cat(π) and then drawing a sample from the chosen distribution Bernoulli(γ_c). | a mapping from the latent representation z to the corresponding label y is learned for each document. | neutral |
train_94580 | In (Chaidaroon and Fang, 2017), variational deep semantic hashing (VDSH) is proposed to solve the semantic hashing problem by using the variational autoencoder (VAE) (Kingma and Welling, 2013). | data dependent hashing seeks to learn a hash function from the given training data in a supervised or an unsupervised way. | neutral |
train_94581 | In this paper, we improve the top-1 MIPS performance by graph-based index. | for example, on fastTextfr, to reach the recall at 95%, ip-NSW requires about 0.3% computations while IPDG only needs 0.07% computations. | neutral |
train_94582 | We found that the edge selection method is vital for the trade-off of effectiveness and efficiency in searching. | for un-normalized vectors, although cosine similarity is still widely applied, the final matching scores of word embeddings are usually weighted (Acree et al., 2016;Srinivas et al., 2010) by ranking-based coefficients (e.g., the side information), which transforms the problem back to search via inner product (see Eq. | neutral |
train_94583 | To address this limitation, one solution is to incorporate relational dependencies of different words into their embeddings. | following (Keller and Lapata, 2003) and (de Cruys, 2014), we select three dependency relations (nsubj, dobj, and amod) as follows: • nsubj: The preference of subject for a given verb. | neutral |
train_94584 | It verifies that the multiplicative composition used in our approach is able to introduce new information to the base codebook. | given the over-parameterization of deep neural nets, the effective compression of them has been receiving increasing attention from the research community. | neutral |
train_94585 | (2017) introduce the Word Embedding Association Test (WEAT), which provides results analogous to earlier psychological work by Greenwald et al. | previous mitigation attempts rely on the operationalisation of gender bias as a projection over a linear subspace. | neutral |
train_94586 | coreference resolution (Rudinger et al., 2018;Zhao et al., 2018). | bias is removed only insofar as the operationalisation allows. | neutral |
train_94587 | Hybrid Ori results in smallest difference for cosine similarity and MRR gap between two gender forms. | we translate a query word and report the precision at k (i.e., fraction of correct translations that are ranked not larger than k) by retrieving its k nearest neighbours in the target language. | neutral |
train_94588 | For gender in semantics, we follow the literature and address only binary gender. | given an English word E_i (a noun, an adjective or a verb), an English occupation word E_o, and the corresponding Spanish/French translation S_i of E_i, we adopt the analogy test "E_i : E_o = S_i : ?" | neutral |
train_94589 | Figure 1 shows that the inanimate nouns lie near the origin point on the semantic gender direction, while the masculine and feminine forms of the occupation words are on the opposite sides for both directions. | extensive effort has been put toward analyzing and mitigating gender bias in word embeddings (Bolukbasi et al., 2016;Zhao et al., 2018b;Dev and Phillips, 2019;Ethayarajh et al., 2019). | neutral |
train_94590 | Experiments show that BILEXNET substantially outperforms translation baselines and approaches the performance of a fully supervised English semantic relation classifier (Section 7). | in BILEXNET, words of the Equivalence class can occur in a parallel sentence pair where they are aligned to each other. | neutral |
train_94591 | We use OntoNotes Release 5.0 1 , which contains a number of annotations including word senses for Chinese. | on average across the test sets, their approach did not outperform SVM with word embedding features. | neutral |
train_94592 | We then split 80% of the numbers into a training set and 20% into a test set. | we investigate this using a numerical extrapolation setting: we train models on a specific integer range and test them on values greater than the largest training number and smaller than the smallest training number. | neutral |
train_94593 | A: 53 Q: How long was the shortest touchdown pass? | we can understand the source of numeracy by isolating and probing these embeddings. | neutral |
train_94594 | Furthermore, subtracting a baseline (Weaver and Tao, 2001) on R(ã, z) is also applied to reduce variance. | recently, Natural Language Interfaces to Databases (NLIDB) has received considerable attention, as they allow users to query databases by directly using natural language. | neutral |
train_94595 | Instead of generating the restated query, we recombine the predicted precedent answer w̃_x and the predicted follow-up answer w̃_y to produce the restated answer w̃. | in the implementation of the REINFORCE algorithm, we set M to be 20. | neutral |
train_94596 | It reflects the approximate upper bound of AnsAcc, as the correctness of SQL-related words is a prerequisite of correct execution in most cases; BLEU, referring to the cumulative 4-gram BLEU score, evaluates how similar the predicted queries are to the golden ones (Papineni et al., 2002). | let f_j = softmax(A_{:,j}), where f_j ∈ R^n denotes the attention weights on x according to y_j. | neutral |
train_94597 | As no intermediate annotation is involved, we design rewards to jointly train the two phases by applying reinforcement learning (RL) (Sutton and Barto, 1998). | contextual information is essential for more accurate and robust semantic parsing, namely context-dependent semantic parsing. | neutral |
train_94598 | As the interaction proceeds, the user question becomes more complicated as it requires longer SQL query to answer. | on SparC, we report two metrics: question match accuracy which is the score average over all questions and interaction match accuracy which is average over all interactions. | neutral |
train_94599 | Furthermore, we also investigate the effect of copying segment. | in the context-dependent scenario, the contextual history is crucial to understand the follow-up questions from users, and the system often needs to reproduce partial sequences generated in previous turns. | neutral |
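
The sample rows above all follow the four-column schema in the header (id, sentence1, sentence2, label). No loader name or file format is stated on this page, so the snippet below is only a minimal parsing sketch under the assumption that rows stay pipe-delimited exactly as rendered here; the class and function names are illustrative, not part of any published API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SentencePair:
    """One data row: an id, two sentences, and an inference label."""
    id: str
    sentence1: str
    sentence2: str
    label: str


def parse_row(line: str) -> Optional[SentencePair]:
    """Parse one pipe-delimited row; return None for header/separator/malformed lines.

    Assumes the sentences themselves contain no '|' characters,
    which holds for the sample rows shown above.
    """
    cells = [cell.strip() for cell in line.strip().strip("|").split("|")]
    if len(cells) != 4 or not cells[0].startswith("train_"):
        return None
    return SentencePair(*cells)


def parse_table(lines: List[str]) -> List[SentencePair]:
    """Keep only well-formed data rows."""
    return [row for row in (parse_row(line) for line in lines) if row is not None]


# Example usage with one row shown above (sentences shortened for brevity).
rows = parse_table([
    "train_94501 | We only get metadata for about 53% and 67% users ... | such private information is only accessible to internet service providers. | neutral |",
])
print(rows[0].id, "->", rows[0].label)  # train_94501 -> neutral
```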