{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T11:03:49.907923Z" }, "title": "Named Entity Recognition for Social Media Texts with Semantic Augmentation", "authors": [ { "first": "Yuyang", "middle": [], "last": "Nie", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong (Shenzhen)", "location": {} }, "email": "" }, { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong (Shenzhen)", "location": {} }, "email": "yhtian@uw.edu" }, { "first": "Wan", "middle": [ "\u2665" ], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong (Shenzhen)", "location": {} }, "email": "wanxiang@sribd.cn" }, { "first": "Yan", "middle": [], "last": "Song \u2660\u2665 \u2020", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong (Shenzhen)", "location": {} }, "email": "" }, { "first": "Bo", "middle": [], "last": "Dai", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong (Shenzhen)", "location": {} }, "email": "daibo@uestc.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Existing approaches for named entity recognition suffer from data sparsity problems when conducted on short and informal texts, especially user-generated social media content. Semantic augmentation is a potential way to alleviate this problem. Given that rich semantic information is implicitly preserved in pre-trained word embeddings, they are potential ideal resources for semantic augmentation. In this paper, we propose a neural-based approach to NER for social media texts where both local (from running text) and augmented semantics are taken into account. In particular, we obtain the augmented semantic information from a large-scale corpus, and propose an attentive semantic augmentation module and a gate module to encode and aggregate such information, respectively. Extensive experiments are performed on three benchmark datasets collected from English and Chinese social media platforms, where the results demonstrate the superiority of our approach to previous studies across all three datasets. 1 * Equal contribution.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Existing approaches for named entity recognition suffer from data sparsity problems when conducted on short and informal texts, especially user-generated social media content. Semantic augmentation is a potential way to alleviate this problem. Given that rich semantic information is implicitly preserved in pre-trained word embeddings, they are potential ideal resources for semantic augmentation. In this paper, we propose a neural-based approach to NER for social media texts where both local (from running text) and augmented semantics are taken into account. In particular, we obtain the augmented semantic information from a large-scale corpus, and propose an attentive semantic augmentation module and a gate module to encode and aggregate such information, respectively. Extensive experiments are performed on three benchmark datasets collected from English and Chinese social media platforms, where the results demonstrate the superiority of our approach to previous studies across all three datasets. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The increasing popularity of microblogs results in a large amount of user-generated data, in which texts are usually short and informal. How to effectively understand these texts remains a challenging task, since the insights are hidden in the unstructured forms of social media posts. Thus, named entity recognition (NER) is a critical step for detecting proper entities in texts and providing support for downstream natural language processing (NLP) tasks (Pang et al., 2019; Martins et al., 2019) .", "cite_spans": [ { "start": 453, "end": 472, "text": "(Pang et al., 2019;", "ref_id": "BIBREF20" }, { "start": 473, "end": 494, "text": "Martins et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1 : An example showing that an NE tagged with \"PER\" (Person) is suggested by its similar words. However, NER in social media remains a challenging task because (i) it suffers from the data sparsity problem, since entities usually represent a small proportion of proper names, which makes the task hard to generalize; and (ii) social media texts do not follow strict syntactic rules (Ritter et al., 2011) . To tackle these challenges, previous studies tried to leverage domain information (e.g., existing gazetteers and embeddings trained on large social media text) and external features (e.g., part-of-speech tags) to help with social media NER (Peng and Dredze, 2015; Aguilar et al., 2017) . However, these approaches rely on extra efforts to obtain such information and suffer from noise in the resulting information. For example, training embeddings for the social media domain could bring many unusual expressions into the vocabulary. Inspired by studies using semantic augmentation (especially from lexical semantics) to improve model performance on many NLP tasks (Song and Xia, 2013; Song et al., 2018a; Kumar et al., 2019; Amjad et al., 2020) , semantic augmentation is also a promising solution for social media NER. Figure 1 shows a typical case. \"Chris\", supposed to be tagged with \"Person\" in this example sentence, is tagged with other labels in most cases. Therefore, at prediction time, it is difficult to label \"Chris\" correctly. A sound solution is to augment the semantic space of \"Chris\" through its similar words, such as \"Jason\" and \"Mike\", which can be obtained from existing pre-trained word embeddings from the general domain.", "cite_spans": [ { "start": 382, "end": 403, "text": "(Ritter et al., 2011)", "ref_id": "BIBREF24" }, { "start": 645, "end": 668, "text": "(Peng and Dredze, 2015;", "ref_id": "BIBREF21" }, { "start": 669, "end": 690, "text": "Aguilar et al., 2017)", "ref_id": "BIBREF0" }, { "start": 1070, "end": 1090, "text": "(Song and Xia, 2013;", "ref_id": "BIBREF29" }, { "start": 1091, "end": 1110, "text": "Song et al., 2018a;", "ref_id": "BIBREF27" }, { "start": 1111, "end": 1130, "text": "Kumar et al., 2019;", "ref_id": "BIBREF13" }, { "start": 1131, "end": 1150, "text": "Amjad et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 1", "ref_id": null }, { "start": 1224, "end": 1232, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose an effective approach to NER for social media texts with semantic augmentation.
In doing so, we augment the semantic space for each token with pre-trained word embedding models, such as GloVe (Pennington et al., 2014) and Tencent Embedding (Song et al., 2018b) , and encode the augmented semantic information through an attentive semantic augmentation module. Then we apply a gate module to weigh the contributions of the augmentation module and the context encoding module in the NER process. To further improve NER performance, we also adopt multiple types of pre-trained word embeddings for feature extraction, which has been demonstrated to be effective in previous studies (Akbik et al., 2018; Jie and Lu, 2019; Kasai et al., 2019; Kim et al., 2019) . To evaluate our approach, we conduct experiments on three benchmark datasets, where the results show that our model outperforms the state-of-the-art models with a clear advantage across all datasets.", "cite_spans": [ { "start": 218, "end": 243, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF22" }, { "start": 266, "end": 286, "text": "(Song et al., 2018b)", "ref_id": "BIBREF28" }, { "start": 687, "end": 707, "text": "(Akbik et al., 2018;", "ref_id": "BIBREF2" }, { "start": 708, "end": 725, "text": "Jie and Lu, 2019;", "ref_id": "BIBREF8" }, { "start": 726, "end": 745, "text": "Kasai et al., 2019;", "ref_id": "BIBREF9" }, { "start": 746, "end": 763, "text": "Kim et al., 2019;", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of social media NER is conventionally regarded as a sequence labeling task, where an input sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Model", "sec_num": "2" }, { "text": "X = x_1, x_2, \u2022 \u2022 \u2022 , x_n with n tokens is annotated with its corresponding NE labels Y = y_1, y_2, \u2022 \u2022 \u2022 , y_n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Model", "sec_num": "2" }, { "text": "of the same length. Following this paradigm, we propose a neural model with semantic augmentation for social media NER. Figure 2 shows the architecture of our model, where the backbone model and the semantic augmentation module are illustrated in white and yellow backgrounds, respectively. For each token in the input sentence, we first extract the most similar words to the token according to their pre-trained embeddings. Then, the augmentation module uses an attention mechanism to weight the semantic information carried by the extracted words. Afterwards, the weighted semantic information is leveraged to enhance the backbone model through a gate module.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 133, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Proposed Model", "sec_num": "2" }, { "text": "In the following text, we first introduce the encoding procedure for augmenting semantic information. Then, we present the gate module that incorporates the augmented information into the backbone model. Finally, we elaborate on the tagging procedure for NER with the aforementioned enhancement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Model", "sec_num": "2" }, { "text": "High-quality text representation is the key to obtaining good model performance for many NLP tasks (Song et al., 2017; Sileo et al., 2019) . However, obtaining such text representation is not easy in the social media domain because of the data sparsity problem.
Motivated by this fact, we propose a semantic augmentation mechanism for social media NER by enhancing the representation of each token in the input sentence with the most similar words in its semantic space, which can be measured by pre-trained embeddings. Figure 2 : The overall architecture of our proposed model with semantic augmentation. An example sentence and its output NE labels are given, where the augmented semantic information for the word \"Chris\" is also illustrated with the processing through the augmentation module and the gate module.", "cite_spans": [ { "start": 103, "end": 122, "text": "(Song et al., 2017;", "ref_id": "BIBREF26" }, { "start": 123, "end": 142, "text": "Sileo et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 301, "end": 309, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "In doing so, for each token x_i \u2208 X, we use pre-trained word embeddings (e.g., GloVe for English and Tencent Embedding for Chinese) to extract the top m words that are most similar to x_i based on cosine similarities and denote them as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_i = \\{c_{i,1}, c_{i,2}, \\cdots, c_{i,j}, \\cdots, c_{i,m}\\}", "eq_num": "(1)" } ], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "Afterwards, we use another embedding matrix to map all extracted words c_{i,j} to their corresponding embeddings e_{i,j}. Since not all c_{i,j} \u2208 C_i are helpful for predicting the NE label of x_i in the given context, it is important to distinguish the contributions of different words to the NER task in that context. Considering that attention- and weight-based approaches have been demonstrated to be effective choices for selectively leveraging extra information in many tasks (Kumar et al., 2018; Margatina et al., 2019; Tian et al., 2020a,d,b,c) , we propose an attentive semantic augmentation module (denoted as AU) to weight the words according to their contributions to the task in different contexts. Specifically, for each token x_i, the augmentation module assigns a weight to each word c_{i,j} \u2208 C_i by", "cite_spans": [ { "start": 468, "end": 488, "text": "(Kumar et al., 2018;", "ref_id": "BIBREF12" }, { "start": 489, "end": 512, "text": "Margatina et al., 2019;", "ref_id": "BIBREF16" }, { "start": 513, "end": 538, "text": "Tian et al., 2020a,d,b,c)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_{i,j} = \\frac{\\exp(h_i \\cdot e_{i,j})}{\\sum_{j=1}^{m} \\exp(h_i \\cdot e_{i,j})} ,", "eq_num": "(2)" } ], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "where h_i is the hidden vector for x_i obtained from the context encoder, with its dimension matching that of the embedding e_{i,j} of c_{i,j}.
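As a concrete illustration of Eqs. (1) and (2), the following is a minimal numpy sketch of extracting the top-m similar words and computing their attention weights; the names `embeddings`, `vocab`, and `h_i` are illustrative assumptions, not taken from the authors' released code.

```python
import numpy as np

def top_m_similar(token, embeddings, vocab, m=10):
    # Eq. (1): pick the m words most similar to `token` by cosine
    # similarity over a pre-trained embedding matrix (one row per word).
    x = embeddings[vocab[token]]
    sims = embeddings @ x / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(x) + 1e-8)
    sims[vocab[token]] = -np.inf          # exclude the token itself
    top_ids = np.argsort(-sims)[:m]
    id2word = {i: w for w, i in vocab.items()}
    return [id2word[i] for i in top_ids]

def attention_weights(h_i, E_i):
    # Eq. (2): p_{i,j} = exp(h_i . e_{i,j}) / sum_j exp(h_i . e_{i,j}),
    # where the rows of E_i are the embeddings e_{i,j} of the m candidates.
    scores = E_i @ h_i
    scores -= scores.max()                # numerical stability
    p = np.exp(scores)
    return p / p.sum()
```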
Then, we apply the weight p_{i,j} to the word c_{i,j} to compute the final augmented semantic representation by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v_i = \\sum_{j=1}^{m} p_{i,j} e_{i,j} ,", "eq_num": "(3)" } ], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "where v_i is the derived output of AU and contains the weighted semantic information. Therefore, the augmentation module ensures that the augmented semantic information is weighted based on its contribution, and that important semantic information is distinguished accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Semantic Augmentation", "sec_num": "2.1" }, { "text": "We observe that the contribution of the obtained augmented semantic information to the NER task could vary in different contexts, and a gate module (denoted by GA) is naturally desired to weight such information in the varying contexts. Therefore, to improve the capability of NER with the semantic information, we propose a gate module to aggregate such information into the backbone NER model. Particularly, we use a reset gate to control the information flow by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Gate Module", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g = \\sigma(W_1 \\cdot h_i + W_2 \\cdot v_i + b_g) ,", "eq_num": "(4)" } ], "section": "The Gate Module", "sec_num": "2.2" }, { "text": "where W_1 and W_2 are trainable matrices and b_g is the corresponding bias term. Afterwards, we use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Gate Module", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u_i = [g \\cdot h_i] \\oplus [(1 - g) \\cdot v_i]", "eq_num": "(5)" } ], "section": "The Gate Module", "sec_num": "2.2" }, { "text": "to balance the information from the context encoder and the augmentation module, where u_i is the derived output of the gate module, \u2022 represents the element-wise multiplication operation, and 1 is a vector with all its elements equal to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Gate Module", "sec_num": "2.2" }, { "text": "To provide h_i to the augmentation module, we adopt a context encoding module (denoted as CE) proposed in previous work. Compared with vanilla Transformers, this encoder additionally models the direction and distance information of the input, which has been demonstrated to be useful for the NER task.
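The weighted sum and the gate (Eqs. (3)-(5)) can be sketched in a few lines of PyTorch; the module below is a hedged illustration under assumed tensor shapes, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AugmentationGate(nn.Module):
    """Fuse the context vector h_i with the augmented vector v_i."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)  # W_1
        self.w2 = nn.Linear(dim, dim)              # W_2 with bias b_g

    def forward(self, h, p, e):
        # Eq. (3): v_i = sum_j p_{i,j} e_{i,j}; p is (m,), e is (m, dim).
        v = (p.unsqueeze(-1) * e).sum(dim=-2)
        # Eq. (4): g = sigma(W_1 h + W_2 v + b_g), a reset gate in [0, 1].
        g = torch.sigmoid(self.w1(h) + self.w2(v))
        # Eq. (5): u_i = [g * h_i] concatenated with [(1 - g) * v_i].
        return torch.cat([g * h, (1 - g) * v], dim=-1)
```

Note that Eq. (5) concatenates the two gated halves rather than adding them, which keeps the contributions of the context encoder and the augmentation module separable for the downstream tagger.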
Therefore, the encoding procedure of the input text can be denoted as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H = CE(E),", "eq_num": "(6)" } ], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "where E is the sequence of input token embeddings and H = [h_1, h_2, \u2022 \u2022 \u2022 , h_n] the resulting sequence of hidden vectors. Since pre-trained embeddings carry text information from a large-scale corpus, and different types of them may contain diverse information, a straightforward way of incorporating them is to concatenate their embedding vectors by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e_i = e_i^1 \\oplus e_i^2 \\oplus \\cdots \\oplus e_i^T ,", "eq_num": "(7)" } ], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "where e_i is the final word embedding for x_i and T is the number of embedding types. Afterwards, a trainable matrix W_u is used to map u_i obtained from the gate module to the output space by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "o_i = W_u \u2022 u_i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "Finally, a conditional random field (CRF) decoder is applied to predict the labels y_i \u2208 L (where L is the set of all NE labels) in the output sequence Y by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y_i = \\arg\\max_{y_i \\in L} \\frac{\\exp(W_c \\cdot o_i + b_c)}{\\sum_{y_{i-1} y_i} \\exp(W_c \\cdot o_i + b_c)} ,", "eq_num": "(8)" } ], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "where W_c and b_c are the trainable parameters that model the transition from y_{i-1} to y_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Procedure", "sec_num": "2.3" }, { "text": "In our experiments, we use three social media benchmark datasets, including WNUT16 (W16) (Strauss et al., 2016), WNUT17 (W17) (Derczynski et al., 2017), and Weibo (WB) (Peng and Dredze, 2015), where W16 and W17 are English datasets constructed from Twitter, and WB is built from the Chinese social media platform Sina Weibo. For all three datasets, we use the original splits and report their statistics in Table 1 (e.g., the number of sentences (#Sent.), entities (#Ent.), and the percentage of unseen entities (%Uns.) with respect to the entities appearing in the training set). For model implementation, we follow Lample et al. (2016) and use the BIOES tag schema to represent the NE labels of tokens in the input sentence. For the text input, we try two types of embeddings for each language. 2 Table 2 : F1 scores of the baseline model and ours enhanced with semantic augmentation (\"SE\") and the gate module (\"GA\") on the development (a) and test (b) sets. \"DS\" and \"AU\" represent the direct summation and the attentive augmentation module, respectively. Y and N denote the use and non-use of the corresponding modules.
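Returning to the tagging procedure in Section 2.3, Eq. (8) amounts to first-order CRF decoding, which is typically implemented with Viterbi search. Below is a hedged numpy sketch, where `emissions[t, y]` stands for the score W_c · o_t + b_c of label y at position t and `transitions` for the learned label-transition scores; both names are assumptions for illustration.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    # emissions: (n, L) per-token, per-label scores (cf. Eq. (8));
    # transitions: (L, L) scores for moving from label y' to label y.
    n, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, L), dtype=int)
    for t in range(1, n):
        # total[y', y]: best score ending at label y via previous label y'.
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]  # label indices for y_1 ... y_n
```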
", "cite_spans": [ { "start": 89, "end": 111, "text": "(Strauss et al., 2016)", "ref_id": "BIBREF30" }, { "start": 127, "end": 152, "text": "(Derczynski et al., 2017)", "ref_id": "BIBREF4" }, { "start": 170, "end": 193, "text": "(Peng and Dredze, 2015)", "ref_id": "BIBREF21" }, { "start": 626, "end": 646, "text": "Lample et al. (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 416, "end": 423, "text": "Table 1", "ref_id": null }, { "start": 786, "end": 793, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Settings", "sec_num": "3.1" }, { "text": "Specifically, for English, we use ELMo (Peters et al., 2018) and BERT-cased large (Devlin et al., 2019); for Chinese, we use Tencent Embedding (Song et al., 2018b) and ZEN (Diao et al., 2019). 3 In the context encoding module, we use a two-layer Transformer-based encoder proposed in previous work, with 128 hidden units and 12 heads. To extract similar words carrying augmented semantic information, we use the pre-trained word embeddings from GloVe for English and those from Tencent Embedding for Chinese to extract the 10 most similar words (i.e., m = 10). 4 In the augmentation module, we randomly initialize the embeddings of the extracted words (i.e., e_{i,j} for c_{i,j}) to represent the semantic information carried by those words. 5 During the training process, we fix all pre-trained embeddings in the embedding layer and use Adam (Kingma and Ba, 2015) to optimize the negative log-likelihood loss function, with the learning rate set to \u03b7 = 0.0001, \u03b2_1 = 0.9, and \u03b2_2 = 0.99. We train 50 epochs for each method with the batch size set to 32 and tune the hyper-parameters on the development set. 6 The model that achieves the best performance on the development set is evaluated on the test set, with the F1 scores obtained from the official conlleval toolkit. 7", "cite_spans": [ { "start": 60, "end": 81, "text": "(Peters et al., 2018)", "ref_id": "BIBREF23" }, { "start": 103, "end": 124, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 165, "end": 185, "text": "(Song et al., 2018b)", "ref_id": "BIBREF28" }, { "start": 196, "end": 215, "text": "(Diao et al., 2019)", "ref_id": "BIBREF6" }, { "start": 218, "end": 219, "text": "3", "ref_id": null }, { "start": 760, "end": 761, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "3.1" }, { "text": "To explore the effect of the proposed attentive semantic augmentation module (AU) and the gate module (GA), we run different settings of our model with and without these modules. In addition, we also try baselines that use direct summation (DS) to leverage the semantic information carried by the similar words, where the embeddings of the words are directly summed without attention weighting. The experimental results (F1 scores) of the baselines and our approach on the development and test sets of all datasets are reported in Table 2 (a) and (b), respectively.", "cite_spans": [], "ref_spans": [ { "start": 533, "end": 540, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" },
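To make the DS-vs-AU comparison concrete, here is a minimal numpy sketch of the two ways of aggregating the extracted candidate embeddings; the function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def augment_ds(E_i):
    # DS baseline: candidate embeddings are summed with equal weight,
    # ignoring the current context entirely.
    return E_i.sum(axis=0)

def augment_au(h_i, E_i):
    # AU: candidates are weighted by context-dependent attention
    # (Eqs. (2)-(3)) before being combined.
    scores = E_i @ h_i
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ E_i
```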
{ "text": "There are some observations from the results on the development and test sets. First, compared to the baseline without semantic augmentation (ID=1), the models using direct summation (DS, ID=2) to incorporate different semantic information undermine NER performance on two of the three datasets, namely W17 and WB; on the contrary, the models using the proposed attentive semantic augmentation module (AU, ID=4) consistently outperform the baselines (ID=1 and ID=2) on all datasets. This indicates that AU can distinguish the contributions of the different semantic information carried by different words in the given context and leverage it accordingly to improve NER performance. Second, comparing the results of models with and without the gate module (GA) (i.e., ID=3 vs. ID=2 and ID=5 vs. ID=4), we find that the models with the gate module achieve superior performance to those without it. This observation suggests that the importance of the information from the context encoder and AU varies, and that the proposed gate module is effective in adjusting the weights according to their contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "Moreover, we compare our model under the best setting with previous models on all three datasets in Table 3 , where our model outperforms others on all datasets. We believe that new state-of-the-art performance is established. The reason could be that, compared to previous studies, our model is effective in alleviating the data sparsity problem in social media NER with the augmentation module to encode augmented semantic information. Besides, the gate module can distinguish the importance of information from the context encoder and AU according to their contributions to NER.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "This work focuses on addressing the data sparsity problem in social media NER, where unseen NEs are one of the important factors that hurt model performance. To analyze whether our approach with attentive semantic augmentation (AU) and the gate module (GA) can address this problem, we report the recall of our approach (i.e., \"+AU +GA\") on the unseen NEs on the test set of all datasets in Table 4 . For reference, we also report the recall of the baseline without AU and GA, as well as our runs of previous studies (marked by \" * \"). It is clearly observed that our approach outperforms the baseline and previous studies on unseen NEs on all datasets, which shows that it can appropriately leverage the semantic information carried by similar words and thus alleviate the data sparsity problem.", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 420, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Performance on Unseen Named Entities", "sec_num": "4.1" }, { "text": "To demonstrate how the augmented semantic information improves NER with the attentive augmentation module and the gate module, we show the extracted augmented information for the word \"Chris\" and visualize the weights for each augmented term in Figure 3 , where deeper color refers to higher weight. Figure 3 : An example of helping recognize the NE \"Chris\" by augmented semantic information (darker color refers to greater value).
\"CE\" and \"AU \" represent the context encoder and attentive augmentation module, respectively.", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 253, "text": "Figure 3", "ref_id": null }, { "start": 292, "end": 300, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Case Study", "sec_num": "4.2" }, { "text": "weight. In this case, the words \"steve\" and \"jason\" have higher weights in AU . The explanation could be that in all cases, these two words are a kind of \"Person\". Thus, higher attention to these terms helps our model to identify the correct NE label. On the contrary, the term \"anderson\" and \"andrew\" never occur in the dataset, and therefore provide no helpful effect in this case and eventually end with the lower weights in AU . In addition, a model can also mislabel \"Chris\" as \"Music-Artist\", because \"Chris\" belongs to that NE type in most cases and there is a word \"filming\" in its context. However, our model with the gate module can distinguish that the information from semantic augmentation is more important and thus make correct prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "4.2" }, { "text": "In this paper, we proposed a neural-based approach to enhance social media NER with semantic augmentation to alleviate data sparsity problem. Particularly, an attentive semantic augmentation module is suggested to encode semantic information and a gate module is applied to aggregate such information to tagging process. Experiments conducted on three benchmark datasets in English and Chinese show that our model outperforms previous studies and achieves the new state-of-the-art result. Table 5 : Experimental results (F 1 scores) of our approach with semantic augmentation (AU ) and gate module (GA) on all datasets, where only one type of embeddings is used in the embedding layer to represent the input sentence. The results of their corresponding baseline without AU and GA are also reported.", "cite_spans": [], "ref_spans": [ { "start": 489, "end": 496, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In our main experiments, we use two types of embeddings for each language: ELMo (Peters et al., 2018) and BERT-cased large (Devlin et al., 2019) for English, and Tencent Embedding (Song et al., 2018b) and ZEN (Diao et al., 2019) for Chinese. In Table 5 , we report the results (F 1 scores) of our model with the best setting (i.e. the full model with semantic augmentation (AU ) and gate module (GA)) as well as the baselines without AU and GA, where either one of the two types of embedding is used to represent the input sentence. From the results, it is found that our model with AU and GA can consistently outperforms the baseline models with different settings of embeddings. 
Table 6 : Experimental results (F1 scores) of our model with AU and GA on the WB dataset, where BERT or ZEN is used as one of the two types of embeddings (the other one is Tencent Embedding) to represent the input sentence for the embedding layer.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Peters et al., 2018)", "ref_id": "BIBREF23" }, { "start": 123, "end": 144, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 180, "end": 200, "text": "(Song et al., 2018b)", "ref_id": "BIBREF28" }, { "start": 209, "end": 228, "text": "(Diao et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 245, "end": 252, "text": "Table 5", "ref_id": null }, { "start": 681, "end": 688, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In our main experiments, we use ZEN (Diao et al., 2019) instead of BERT (Devlin et al., 2019) as the embedding to represent the input for Chinese. The reason is that ZEN achieves better performance compared with BERT, which is confirmed by Table 6 , with its results (F1 scores) showing the performance of our approach with the best settings (i.e., two types of embeddings with AU and GA) on the WB dataset.", "cite_spans": [ { "start": 36, "end": 55, "text": "(Diao et al., 2019)", "ref_id": "BIBREF6" }, { "start": 72, "end": 93, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 240, "end": 248, "text": "Table 6", "ref_id": null }, { "start": 266, "end": 278, "text": "(F 1 scores)", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "2 We report the results of using each individual type of embeddings in Appendix A. 3 We obtain the pre-trained BERT model from https://github.com/google-research/bert, Tencent Embedding from https://ai.tencent.com/ailab/nlp/embedding.html, and ZEN from https://github.com/sinovation/ZEN. Note that we use ZEN because it achieves better performance than BERT on different Chinese NLP tasks. For reference, we report the results of using BERT in Appendix B. 4 The results of using other embeddings as sources to extract similar words are reported in Appendix C. 5 We also try other ways (e.g., GloVe for English and Tencent Embedding for Chinese) to initialize the word embeddings, but do not find significant differences. 6 We report the details of the hyper-parameter settings of different models in Appendix D. 7 The script to evaluate all models in the experiments is obtained from https://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "8 We obtain Word2vec from https://code.google.com/archive/p/word2vec/, GloVe from https://nlp.stanford.edu/projects/glove/, and Giga from https://github.com/jiesutd/LatticeLSTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " Table 7 : Experimental results (F1 scores) of our best performing models (i.e., the ones with AU and GA) using different types of pre-trained embeddings as the source to extract similar words. The results of the baseline (the one without AU and GA) are also reported. In addition to using embeddings for input sentence representation, we also try different embedding sources (i.e., pre-trained word embeddings) to extract similar words for each token in the input sentence.
For English, we use Word2vec (Mikolov et al., 2013) and GloVe; for Chinese, we use Giga (Zhang and Yang, 2018) and Tencent Embedding (Song et al., 2018b) . 8 The experimental results of our model with the best setting (i.e., the one with AU and GA) using different sources are reported in Table 7 . The result of the baseline model without AU and GA is also reported for reference. The results show that our approach consistently outperforms the baseline with different sources to find similar words, which demonstrates the robustness of our approach. We report all values of the hyper-parameters tried for our models in Table 8 , where we try different combinations of them and find the best hyper-parameter configurations (which are also reported in Table 8 ) on the development set of each dataset.", "cite_spans": [ { "start": 497, "end": 519, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF19" }, { "start": 557, "end": 579, "text": "(Zhang and Yang, 2018)", "ref_id": "BIBREF38" }, { "start": 602, "end": 622, "text": "(Song et al., 2018b)", "ref_id": "BIBREF28" }, { "start": 625, "end": 626, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 7", "ref_id": null }, { "start": 758, "end": 765, "text": "Table 7", "ref_id": null }, { "start": 1094, "end": 1101, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Multi-task Approach for Named Entity Recognition in Social Media Data", "authors": [ { "first": "Gustavo", "middle": [], "last": "Aguilar", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Maharjan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017", "volume": "", "issue": "", "pages": "148--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Aguilar, Suraj Maharjan, Adri\u00e1n Pastor L\u00f3pez-Monroy, and Thamar Solorio. 2017. A Multi-task Approach for Named Entity Recognition in Social Media Data. In Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 148-153.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pooled Contextualized Embeddings for Named Entity Recognition", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "724--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled Contextualized Embeddings for Named Entity Recognition.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 724-728.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Contextual String Embeddings for Sequence Labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1638-1649.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Data Augmentation using Machine Translation for Fake News Detection in the Urdu Language", "authors": [ { "first": "Maaz", "middle": [], "last": "Amjad", "suffix": "" }, { "first": "Grigori", "middle": [], "last": "Sidorov", "suffix": "" }, { "first": "Alisa", "middle": [], "last": "Zhila", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2537--2542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maaz Amjad, Grigori Sidorov, and Alisa Zhila. 2020. Data Augmentation using Machine Translation for Fake News Detection in the Urdu Language. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2537-2542, Marseille, France.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Results of the WNUT2017 Shared Task on Novel and Emerging Entity Recognition", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" }, { "first": "Marieke", "middle": [], "last": "Van Erp", "suffix": "" }, { "first": "Nut", "middle": [], "last": "Limsopatham", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017", "volume": "", "issue": "", "pages": "140--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 Shared Task on Novel and Emerging Entity Recognition.
In Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 140-147.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations", "authors": [ { "first": "Shizhe", "middle": [], "last": "Diao", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonggang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "arXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations. arXiv, abs/1911.00720.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "CNN-Based Chinese NER with Lexicon Rethinking", "authors": [ { "first": "Tao", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Ruotian", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lujun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yu-Gang", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence", "volume": "2019", "issue": "", "pages": "4982--4988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Gui, Ruotian Ma, Qi Zhang, Lujun Zhao, Yu-Gang Jiang, and Xuanjing Huang. 2019. CNN-Based Chinese NER with Lexicon Rethinking.
In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4982-4988.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Dependency-Guided LSTM-CRF for Named Entity Recognition", "authors": [ { "first": "Zhanming", "middle": [], "last": "Jie", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "3860--3870", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhanming Jie and Wei Lu. 2019. Dependency-Guided LSTM-CRF for Named Entity Recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3860-3870.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Syntax-aware Neural Semantic Role Labeling with Supertags", "authors": [ { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "701--709", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jungo Kasai, Dan Friedman, Robert Frank, Dragomir R. Radev, and Owen Rambow. 2019. Syntax-aware Neural Semantic Role Labeling with Supertags. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 701-709.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semantic Sentence Matching with Densely-Connected Recurrent and Co-Attentive Information", "authors": [ { "first": "Seonhoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Inho", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Nojun", "middle": [], "last": "Kwak", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence", "volume": "2019", "issue": "", "pages": "6586--6593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seonhoon Kim, Inho Kang, and Nojun Kwak. 2019. Semantic Sentence Matching with Densely-Connected Recurrent and Co-Attentive Information.
In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6586-6593.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Knowledge-Enriched Two-Layered Attention Network for Sentiment Analysis", "authors": [ { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "253--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Kumar, Daisuke Kawahara, and Sadao Kurohashi. 2018. Knowledge-Enriched Two-Layered Attention Network for Sentiment Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 253-258, New Orleans, Louisiana.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Closer Look at Feature Space Data Augmentation for Few-Shot Intent Classification", "authors": [ { "first": "Varun", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Hadrien", "middle": [], "last": "Glaude", "suffix": "" }, { "first": "Cyprien", "middle": [], "last": "De Lichy", "suffix": "" }, { "first": "Wlliam", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Varun Kumar, Hadrien Glaude, Cyprien de Lichy, and Wlliam Campbell. 2019. A Closer Look at Feature Space Data Augmentation for Few-Shot Intent Classification.
In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 1-10, Hong Kong, China.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural Architectures for Named Entity Recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 260-270.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55-60.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Attention-based Conditioning Methods for External Knowledge Integration", "authors": [ { "first": "Katerina", "middle": [], "last": "Margatina", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" }, { "first": "Alexandros", "middle": [], "last": "Potamianos", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3944--3951", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based Conditioning Methods for External Knowledge Integration.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3944-3951, Florence, Italy.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Joint Learning of Named Entity Recognition and Entity Linking", "authors": [ { "first": "Pedro", "middle": [ "Henrique" ], "last": "Martins", "suffix": "" }, { "first": "Zita", "middle": [], "last": "Marinho", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "2", "issue": "", "pages": "190--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2019. Joint Learning of Named Entity Recognition and Entity Linking. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 2: Student Research Workshop, pages 190-196.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Glyce: Glyph-vectors for Chinese Character Representations", "authors": [ { "first": "Yuxian", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Muyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qinghong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2742--2753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for Chinese Character Representations. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 2742-2753.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "1st International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space.
In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "HAS-QA: Hierarchical Answer Spans Model for Open-Domain Question Answering", "authors": [ { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lixin", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019", "volume": "", "issue": "", "pages": "6875--6882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. HAS-QA: Hierarchical Answer Spans Model for Open-Domain Question Answering. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6875-6882.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Named Entity Recognition for Chinese Social Media with Jointly Trained Embeddings", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "548--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanyun Peng and Mark Dredze. 2015. Named Entity Recognition for Chinese Social Media with Jointly Trained Embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 548-554.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "GloVe: Global Vectors for Word Representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Deep Contextualized Word Representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Named Entity Recognition in Tweets: An Experimental Study", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "2011", "issue": "", "pages": "1524--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named Entity Recognition in Tweets: An Experimental Study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1524-1534.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mining Discourse Markers for Unsupervised Sentence Representation Learning", "authors": [ { "first": "Damien", "middle": [], "last": "Sileo", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Van De Cruys", "suffix": "" }, { "first": "Camille", "middle": [], "last": "Pradel", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Muller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3477--3486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damien Sileo, Tim Van De Cruys, Camille Pradel, and Philippe Muller. 2019. Mining Discourse Markers for Unsupervised Sentence Representation Learning.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3477-3486, Minneapolis, Minnesota.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning Word Representations with Regularization from Prior Knowledge", "authors": [ { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Chia-Jung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "143--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Song, Chia-Jung Lee, and Fei Xia. 2017. Learning Word Representations with Regularization from Prior Knowledge. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 143-152.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Joint Learning Embeddings for Chinese Words and their Components via Ladder Structured Networks", "authors": [ { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18", "volume": "", "issue": "", "pages": "4375--4381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Song, Shuming Shi, and Jing Li. 2018a. Joint Learning Embeddings for Chinese Words and their Components via Ladder Structured Networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4375-4381.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Directional Skip-Gram: Explicitly Distinguishing Left and Right Context for Word Embeddings", "authors": [ { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haisong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT", "volume": "2", "issue": "", "pages": "175--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018b. Directional Skip-Gram: Explicitly Distinguishing Left and Right Context for Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 175-180.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A Common Case of Jekyll and Hyde: The Synergistic Effect of Using Divided Source Training Data for Feature Augmentation", "authors": [ { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "623--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Song and Fei Xia.
2013. A Common Case of Jekyll and Hyde: The Synergistic Effect of Using Divided Source Training Data for Feature Augmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 623-631, Nagoya, Japan.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Results of the WNUT16 Named Entity Recognition Shared Task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Strauss", "suffix": "" }, { "first": "Bethany", "middle": [], "last": "Toma", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2nd Workshop on Noisy User-generated Text, NUT@COLING 2016", "volume": "", "issue": "", "pages": "138--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Strauss, Bethany Toma, Alan Ritter, Marie-Catherine de Marneffe, and Wei Xu. 2016. Results of the WNUT16 Named Entity Recognition Shared Task. In Proceedings of the 2nd Workshop on Noisy User-generated Text, NUT@COLING 2016, Osaka, Japan, December 11, 2016, pages 138-144.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Leverage Lexical Knowledge for Chinese Named Entity Recognition via Collaborative Graph Network", "authors": [ { "first": "Dianbo", "middle": [], "last": "Sui", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Shengping", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "3828--3838", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2019. Leverage Lexical Knowledge for Chinese Named Entity Recognition via Collaborative Graph Network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3828-3838.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ao", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonggang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8286--8296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020a. Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8286-8296, Online.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Supertagging Combinatory Categorial Grammar with Attentive Graph Convolutional Networks", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, and Fei Xia. 2020b. Supertagging Combinatory Categorial Grammar with Attentive Graph Convolutional Networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Improving Constituency Parsing with Span Attention", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Findings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, Fei Xia, and Tong Zhang. 2020c. Improving Constituency Parsing with Span Attention. In Findings of the 2020 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improving Chinese Word Segmentation with Wordhood Memory Networks", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonggang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8274--8285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020d. Improving Chinese Word Segmentation with Wordhood Memory Networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8274-8285.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Exploiting Multiple Embeddings for Chinese Named Entity Recognition", "authors": [ { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Feiyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jialong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Chenliang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019", "volume": "", "issue": "", "pages": "2269--2272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Canwen Xu, Feiyang Wang, Jialong Han, and Chenliang Li. 2019. Exploiting Multiple Embeddings for Chinese Named Entity Recognition.
In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 2269-2272.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "TENER: Adapting Transformer Encoder for Named Entity Recognition", "authors": [ { "first": "Hang", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Bocao", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Xiaonan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: Adapting Transformer Encoder for Named Entity Recognition. arXiv, abs/1911.04474.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Chinese NER Using Lattice LSTM", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "1", "issue": "", "pages": "1554--1564", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Jie Yang. 2018. Chinese NER Using Lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1554-1564.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Dual Adversarial Neural Transfer for Low-Resource Named Entity Recognition", "authors": [ { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Hongyuan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Rick Siow Mong", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Kwok", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "3461--3471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual Adversarial Neural Transfer for Low-Resource Named Entity Recognition. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3461-3471.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "CAN-NER: Convolutional Attention Network for Chinese Named Entity Recognition", "authors": [ { "first": "Yuying", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Guoxin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "3384--3393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuying Zhu and Guoxin Wang. 2019. CAN-NER: Convolutional Attention Network for Chinese Named Entity Recognition.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3384-3393.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Either BERT or ZEN is used as one of the two types of embeddings (the other type of embedding is Tencent Embedding)", "authors": [ { "first": "", "middle": [], "last": "Wb", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WB dataset. Either BERT or ZEN is used as one of the two types of embeddings (the other type of embedding is Tencent Embedding).", "links": null } }, "ref_entries": { "TABREF3": { "text": "", "num": null, "content": "
: Comparison of the F1 scores of our best performing model (the full model with the augmentation module and the gate module) with those reported in previous studies on all English and Chinese social media datasets.
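For completeness, the F1 score here is the conventional entity-level metric, i.e., the harmonic mean of precision P and recall R over predicted and gold entity mentions (a standard definition stated for reference, not a method-specific variant): $F_1 = \frac{2PR}{P + R}$, where a predicted entity counts as correct only if both its span and its type match the gold annotation.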
", "html": null, "type_str": "table" }, "TABREF5": { "text": "", "num": null, "content": "
: The recall of our models with and without the attentive semantic augmentation (AU) and the gate module (GA) on unseen named entities (whose counts are also reported in the first row) on all three datasets. The results of our runs of previous models (marked with "*") are also reported for reference.
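To make the evaluation concrete, the following is a minimal sketch of how recall over unseen entities can be computed, under two assumptions not spelled out in the caption itself: entities are matched on exact span and type, and "unseen" means the entity's surface form never occurs as an annotated entity in the training data. All names in the sketch are illustrative, not taken from the paper's released code.

from typing import List, Set, Tuple

# One entity mention: (start_token, end_token, entity_type, surface_form).
Mention = Tuple[int, int, str, str]

def unseen_entity_recall(
    gold: List[List[Mention]],   # gold mentions, one list per test sentence
    pred: List[List[Mention]],   # predicted mentions, aligned with `gold`
    train_forms: Set[str],       # surface forms annotated as entities in training
) -> float:
    """Entity-level recall restricted to gold mentions whose surface
    form never appears as an entity in the training data."""
    total, correct = 0, 0
    for gold_sent, pred_sent in zip(gold, pred):
        # Predictions are matched on exact (span, type); surface form is ignored.
        pred_spans = {(s, e, t) for s, e, t, _ in pred_sent}
        for s, e, t, form in gold_sent:
            if form in train_forms:
                continue  # only entities unseen in training are counted
            total += 1
            if (s, e, t) in pred_spans:
                correct += 1
    return correct / total if total else 0.0

Under this definition, the counts in the first row of the table would correspond to the value of total accumulated over each test set.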
", "html": null, "type_str": "table" } } } }