|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:53.642572Z" |
|
}, |
|
"title": "De-identification of Privacy-related Entities in Job Postings", |
|
"authors": [ |
|
{ |
|
"first": "Kristian", |
|
"middle": [ |
|
"N\u00f8rgaard" |
|
], |
|
"last": "Jensen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "De-identification is the task of detecting privacy-related entities in text, such as person names, emails and contact data. It has been well-studied within the medical domain. The need for deidentification technology is increasing, as privacy-preserving data handling is in high demand in many domains. In this paper, we focus on job postings. We present JOB-STACK, a new corpus for de-identification of personal data in job vacancies on Stackoverflow. We introduce baselines, comparing Long-Short Term Memory (LSTM) and Transformer models. To improve upon these baselines, we experiment with contextualized embeddings and distantly related auxiliary data via multi-task learning. Our results show that auxiliary data improves de-identification performance. Surprisingly, vanilla BERT turned out to be more effective than a BERT model trained on other portions of Stackoverflow. 2 Related Work 2.1 De-identification in the Medical Domain De-identification has mostly been investigated in the medical domain (e.g., Szarvas et al. (2007);", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "De-identification is the task of detecting privacy-related entities in text, such as person names, emails and contact data. It has been well-studied within the medical domain. The need for deidentification technology is increasing, as privacy-preserving data handling is in high demand in many domains. In this paper, we focus on job postings. We present JOB-STACK, a new corpus for de-identification of personal data in job vacancies on Stackoverflow. We introduce baselines, comparing Long-Short Term Memory (LSTM) and Transformer models. To improve upon these baselines, we experiment with contextualized embeddings and distantly related auxiliary data via multi-task learning. Our results show that auxiliary data improves de-identification performance. Surprisingly, vanilla BERT turned out to be more effective than a BERT model trained on other portions of Stackoverflow. 2 Related Work 2.1 De-identification in the Medical Domain De-identification has mostly been investigated in the medical domain (e.g., Szarvas et al. (2007);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "It is becoming increasingly important to anonymize privacy-related information in text, such as person names and contact details. The task of de-identification is concerned with detecting and anononymizing such information. Traditionally, this problem has been studied in the medical domain by e.g., Szarvas et al. (2007) ; Friedrich et al. (2019) ; Trienes et al. (2020) to anonymize (or pseudo-anonymize) personidentifiable information in electronic health records (EHR). With new privacy-regulations (Section 2) de-identification is becoming more important for broader types of text. For example, a company or public institution might seek to \u2666 The authors contributed equally to this work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 321, |
|
"text": "Szarvas et al. (2007)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 347, |
|
"text": "Friedrich et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 371, |
|
"text": "Trienes et al. (2020)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "de-identify documents before sharing them. On another line, de-identification can benefit society and technology at scale. Particularly auto-regressive models trained on massive text collections pose a potential risk for exposing private or sensitive information (Carlini et al., 2019 (Carlini et al., , 2020 , and de-identification can be one way to address this.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 284, |
|
"text": "(Carlini et al., 2019", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 308, |
|
"text": "(Carlini et al., , 2020", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we analyze how effectively sequence labeling models are in identifying privacyrelated entities in job posts. To the best of our knowledge, we are the first study that investigates de-identification methods applied to job vacancies. In particular, we examine: How do Transformerbased models compare to LSTM-based models on this task (RQ1)? How does BERT compare to BERT Overflow (Tabassum et al., 2020) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 416, |
|
"text": "(Tabassum et al., 2020)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To what extent can we use existing medical deidentification data and Named Entity Recognition (NER) data to improve de-identification performance (RQ3)? To answer these questions, we put forth a new corpus, JOBSTACK, annotated with around 22,000 sentences in English job postings from Stackoverflow for person names, contact details, locations, and information about the profession of the job post itself.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Contributions We present JOBSTACK, the first job postings dataset with professional and personal entity annotations from Stackoverflow. Our experiments on entity de-identification with neural methods show that Transformers outperform bi-LSTMs, but surprisingly a BERT variant trained on another portion of Stackoverflow is less effective. We find auxiliary tasks from both news and the medical domain to help boost performance. Meystre et al. (2010) ; Liu et al. (2015) ; Jiang et al. (2017) ; Friedrich et al. (2019) ; Trienes et al. (2020) ) to ensure the privacy of a patient in the analysis of their medical health records. Apart from an ethical standpoint, it is also a legal requirement imposed by multiple legislations such as the US Health Insurance Portability and Accountability Act (HIPAA) (Act, 1996) and the European General Data Protection Regulation (GDPR) (Regulation, 2016).", |
|
"cite_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 449, |
|
"text": "Meystre et al. (2010)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 469, |
|
"text": "Liu et al. (2015)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 491, |
|
"text": "Jiang et al. (2017)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 517, |
|
"text": "Friedrich et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 541, |
|
"text": "Trienes et al. (2020)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 801, |
|
"end": 812, |
|
"text": "(Act, 1996)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Many prior works in the medical domain used the I2B2/UTHealth dataset to evaluate de-identification. The dataset consists of clinical narratives, which are free-form medical texts written as a first person account by a clinician. Each of the documents describes a certain event, consultation or hospitalization. All of the texts have been annotated with a set of Protected Health Information (PHI) tags (e.g. name, profession, location, age, date, contact, IDs) and subsequently replaced by realistic surrogates. The dataset was originally developed for use in a shared task for automated de-identification systems. Systems tend to perform very well on this set, in the shared task three out of ten systems achieved F1 scores above 90 . More recently, systems reach over 98 F1 with neural models (Dernoncourt et al., 2017; Liu et al., 2017; Khin et al., 2018; Trienes et al., 2020; Johnson et al., 2020) . We took I2B2 as inspiration for annotation of JOBSTACK.", |
|
"cite_spans": [ |
|
{ |
|
"start": 796, |
|
"end": 822, |
|
"text": "(Dernoncourt et al., 2017;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 823, |
|
"end": 840, |
|
"text": "Liu et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 859, |
|
"text": "Khin et al., 2018;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 881, |
|
"text": "Trienes et al., 2020;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 903, |
|
"text": "Johnson et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Past methods for de-identification in the medical domain can be categorised in three categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) Rule-based approaches, (2) traditional machine learning (ML)-based systems (e.g., featurebased Conditional Random Fields (CRFs) (Lafferty et al., 2001) , ensemble combining CRF and rules, data augmentation, clustering), and (3) neural-based approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 155, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rule-based First, Gupta et al. (2004) made use of a set of rules, dictionaries, and fuzzy string matching to identify protected health information (PHI). In a similar fashion, Neamatullah et al. (2008) used lexical look-up tables, regular expressions, and heuristics to find instances of PHI.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 37, |
|
"text": "Gupta et al. (2004)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 201, |
|
"text": "Neamatullah et al. (2008)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Traditional ML Second, classical ML approaches employ feature-based CRFs (Aberdeen et al., 2010; He et al., 2015) . Moreover, earlier work showed the use of CRFs in an ensemble with rules . Other ML approaches include data augmentation by McMurry et al. (2013) , where they added public medical texts to properly distinguish common medical words and phrases from PHI and trained decision trees on the augmented data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 96, |
|
"text": "(Aberdeen et al., 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 113, |
|
"text": "He et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 260, |
|
"text": "McMurry et al. (2013)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Neural methods Third, regarding neural methods, Dernoncourt et al. (2017) were the first to use Bi-LSTMs, which they used in combination with character-level embeddings. Similarly, Khin et al. (2018) performed de-identification by using a Bi-LSTM-CRF architecture with ELMo embeddings (Peters et al., 2018) . Liu et al. (2017) used four individual methods (CRF-based, Bi-LSTM, Bi-LSTM with features, and rule-based methods) for de-identification, and used an ensemble learning method to combine all PHI instances predicted by the three methods. Trienes et al. (2020) opted for a Bi-LSTM-CRF as well, but applied it with contextual string embeddings (Akbik et al., 2018) . Most recently, Johnson et al. (2020) fine-tuned BERT base and BERT large (Devlin et al., 2019) for de-identification. Next to \"vanilla\" BERT, they experiment with fine-tuning different domain specific pre-trained language models, such as SciB-ERT (Beltagy et al., 2019) and BioBERT (Lee et al., 2020) . They achieve state-of-the art performance in de-identification on the I2B2 dataset with the fine-tuned BERT large model. From a different perspective, the approach of Friedrich et al. (2019) is based on adversarial learning, which automatically pseudo-anonymizes EHRs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 73, |
|
"text": "Dernoncourt et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "Khin et al. (2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 306, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 326, |
|
"text": "Liu et al. (2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 566, |
|
"text": "Trienes et al. (2020)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 669, |
|
"text": "(Akbik et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 745, |
|
"end": 766, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 941, |
|
"text": "(Beltagy et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 972, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1142, |
|
"end": 1165, |
|
"text": "Friedrich et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(RQ2)?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Data protection in general however is not only limited to the medical domain. Even though work outside the clinical domain is rare, personal and sensitive data is in abundance in all kinds of data. For example, Eder et al. (2019) pseudonymised German emails. Bevendorff et al. (2020) published a large preprocessed email corpus, where only the email addresses themselves where anonymized. Apart from emails, several works went into deidentification of SMS messages (Treurniet et al., 2012; Patel et al., 2013; Chen and Kan, 2013) in Dutch, French, English and Mandarin respectively. Both Treurniet et al. (2012) ; Chen and Kan (2013) conducted the same strategy and automatically anonymized all occurrences of dates, times, decimal amounts, and numbers with more than one digit (telephone numbers, bank accounts, et cetera), email addresses, URLs, and IP ad- dresses. All sensitive information was replaced with a placeholder. Patel et al. (2013) introduced a system to anonymize SMS messages by using dictionaries. It uses a dictionary of first names and anti-dictionaries (of ordinary language and of some forms of SMS writing) to identify the words that require anonymization. In our work, we study de-identification for names, contact information, addresses, and professions, as further described in Section 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 283, |
|
"text": "Bevendorff et al. (2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 489, |
|
"text": "(Treurniet et al., 2012;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 509, |
|
"text": "Patel et al., 2013;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 529, |
|
"text": "Chen and Kan, 2013)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 611, |
|
"text": "Treurniet et al. (2012)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 633, |
|
"text": "Chen and Kan (2013)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 946, |
|
"text": "Patel et al. (2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "De-identification in other Domains", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this section, we describe the JOBSTACK dataset. There are two basic approaches to remove privacybearing data from the job postings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOBSTACK Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "First, anonymization identifies instances of personal data (e.g. names, email addresses, phone numbers) and replaces these strings by some placeholder (e.g. {name}, {email}, {phone}). The second approach, pseudonymisation preserves the information of personal data by replacing these privacy-bearing strings with randomly chosen alternative strings from the same privacy type (e.g. replacing a name with \"John Doe\"). The term deidentification subsumes both anonymization and pseudonymisation. In this work, we focus on anonymization. 1 Eder et al. (2019) argues that the anonymization approach might be appropriate to eliminate privacy-bearing data in the medical domain, but 1 Meystre (2015) notes that de-identification means removing or replacing personal identifiers to make it difficult to reestablish a link between the individual and his or her data, but it does not make this link impossible. would be inappropriate for most Natural Language Processing (NLP) applications since crucial discriminative information and contextual clues will be erased by anonymization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 535, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOBSTACK Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "If we shift towards pseudonymisation, we argue that there is still the possibility to resurface the original personal data. Henceforth, our goal is to anonimize job postings to the extent that one would not be able to easily identify a company from the job posting. However, as job postings are public, we are aware that it would be simple to find the original company that posted it with a search engine. Nevertheless, we abide to the GDPR compliance which requires us to protect the personal data and privacy of EU citizens for transactions that occur within EU member states (Regulation, 2016) . In job postings this would be the names of employees, and their corresponding contact information. 2 Over a period of time, we scraped 2,755 job postings from Stackoverflow and selected 395 documents to annotate, the subset ranges from June 2020 to September 2020. We manually annotated the job postings with the following five entities: Organization, Location, Contact, Name, and Profession.", |
|
"cite_spans": [ |
|
{ |
|
"start": 578, |
|
"end": 596, |
|
"text": "(Regulation, 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 699, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOBSTACK Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To make the task as realistic as possible, we kept all sentences in the documents. The statistics pro- vided in the following therefore reflect the natural distribution of entities in the data. A snippet of an example job post can be seen in Figure 1 , the full job posting can be found in Appendix A. Table 1 shows the statistics of our dataset. We split our data in 80% train, 10% development, and 10% test. Besides of a regular documentlevel random split, ours is further motivated based on time. The training set covers the job posts posted between June to August 2020 and the development-and test set are posted in September 2020. To split the text into sentences, we use the sentence-splitter library used for processing the Europarl corpus (Koehn, 2005) . In the training set, we see that the average number of sentences is higher than in the development-and test set (6-7 more). We therefore also calculate the density of the entities, meaning the percentage of sentences with at least one entity. The table shows that 14.5% of the sentences in JOBSTACK contain at least one entity. Note that albeit having document boundaries, we treat the task of deidentification as a standard word-level sequence labeling task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 747, |
|
"end": 760, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 250, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "JOBSTACK Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The aforementioned entity tags are based on the English I2B2/UTHealth corpus . The tags are more coarse-grained than the I2B2 tags. For example, we do not distinguish between zip code and city, but tag them with Location. We give a brief explanation of the tags. Organization: This includes all companies and their legal entity mentioned in the job postings. The tag is not limited to the company that authored the job posting, but does also include men-tions of stakeholders or any other company. Location: This is the address of the company in the job posting. The location also refers to all other addresses, zip codes, cities, regions, and countries mentioned throughout the text. This is not limited to the company address, but should be used for all location names in the job posting, including abbreviations. Contact: The label includes, URLs, email addresses and phone numbers. This could be, but is not limited to, contact info of an employee from the authoring company. Name: This label covers names of people. This could be, but is not limited to, a person from the company, such as the contact person, CEO, or the manager. All names appearing in the job posting should be annotated no matter the relation to the job posting itself. Titles such as Dr. are not part of the annotation. Apart from people names in our domain, difficulties could arise with other type of names. An example would be project names, with which one could identify a company. In this work, we did not annotate such names. Profession: This label covers the profession that is being searched for in the job posting or desired prior relevant jobs for the current profession. We do not annotate additional meta information such as gender (e.g. Software Engineer (f/m)). We also do not annotate mentions of colleague positions in neither singular or plural form. For example: \"As a Software Engineer, you are going to work with Security Engineers\". 
Here we annotate Software Engineer as profession, but we do not annotate Security Engineers. While this may sound straightforward, however, there are difficulties in regards to annotating professions. A job posting is free text, meaning that one can write anything they prefer to make the job posting as clear as possible (e.g., Software Engineer (at a unicorn start-up based in [..]). The opposite is also possible, when they are looking for one applicant to fill in one of multiple positions. For example, \"We are looking for an applicant to fill in the position of DevOps/Software Engineer\". From our interpretation, they either want a \"DevOps Engineer\" or a \"Software Engineer\". We decided to annotate the full string of characters \"DevOps/Software Engineer\" as a profession.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Schema", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Token Entity Unlabeled A1 -A2 0.889 0.767 0.892 A1 -A3 0.898 0.782 0.904 A2 -A3 0.917 0.823 0.920 Fleiss' \u03ba 0.902 0.800 0.906 Table 2 : Inter-annotator agreement of the annotators. We show agreement over pairs with Cohen's \u03ba and all annotators with Fleiss' \u03ba.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 133, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To evaluate our annotation guidelines, a sample of the data was annotated by three annotators, one with a background in Linguistics (A1) and two with a background in Computer Science (A2, A3). We used an open source text annotation tool named Doccano (Nakayama et al., 2018) . There are around 1,500 overlapping sentences that we calculated agreement on. The annotations were compared using Cohen's \u03ba (Fleiss and Cohen, 1973) between pairs of annotators, and Fleiss' \u03ba (Fleiss, 1971), which generalises Cohen's \u03ba to more than two concurrent annotations. Table 2 shows three levels of \u03ba calculations, we follow Balasuriya et al. (2009) 's approach of calculating agreement in NER. (1) Token is calculated on the token level, comparing the agreement of annotators on each token (including nonentities) in the annotated dataset. (2) Entity is calculated on the agreement between named entities alone, excluding agreement in cases where all annotators agreed that a token was not a namedentity. (3) Unlabeled refers to the agreement between annotators on the exact span match over the surface string, regardless of the type of named entity (i.e., we only check the position of tag without regarding the type of the named entity). Landis and Koch (1977) state that a \u03ba value greater than 0.81 indicates almost perfect agreement. Given this, all annotators are in strong agreement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 274, |
|
"text": "(Nakayama et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 425, |
|
"text": "(Fleiss and Cohen, 1973)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 634, |
|
"text": "Balasuriya et al. (2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1226, |
|
"end": 1248, |
|
"text": "Landis and Koch (1977)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 554, |
|
"end": 561, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "After this annotation quality estimation, we finalized the guidelines. They formed the basis for the professional linguist annotator, who annotated and finalized the entire final JOBSTACK dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For entity de-identification we use a classic Named Entity Recognition (NER) approach using a Bi-LSTM with a CRF layer. On top of this we evaluate the performance of Transformerbased models with two different pre-trained BERT variants. Furthermore, we evaluate the helpfulness of auxiliary tasks, both using data close to our domain, such as de-identification of medical notes, and more general NER, which covers only a subset of the entities. Further details on the data are given in Section 4.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Firstly, we test a Bi-LSTM sequence tagger (Bilty) (Plank et al., 2016) , both with and without a CRF layer. The architecture is similar to the widely used models in previous works. For example, preliminary results of Bilty versus Trienes et al. (2020) show accuracy almost identical to each other: 99.62% versus 99.76%. Next we test a Transformer based model, namely the MaChAmp (van der Goot et al., 2021) toolkit. Current research shows good results for NER using a Transformer model without a CRF layer ), hence we tested MaChAmp both with and without a CRF layer for predictions. For both models, we use their default parameters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 71, |
|
"text": "(Plank et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "Trienes et al. (2020)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For embeddings, we tested with no pre-trained embeddings, pre-trained Glove (Pennington et al., 2014) embeddings, and Transformer-based pretrained embeddings. For Transformer-based embeddings we focused our attention on two BERT models, BERT base (Devlin et al., 2019) and BERT Overflow (Tabassum et al., 2020) . When using the Transformer-based embeddings with the Bi-LSTM, the embeddings were fixed and did not get updated during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 101, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 268, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 310, |
|
"text": "(Tabassum et al., 2020)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Using the MaChAmp (van der Goot et al., 2021) toolkit, we fine-tune the BERT variant with a Transformer encoder. For the Bi-LSTM sequence tagger, we first derive BERT representations as input to the tagger. The tagger further uses word Table 4 : Performance of multi-task learning on the development set across three runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "and character embeddings which are updated during model training. The BERT Overflow model is a transformer with the same architecture as BERT base . It has been trained from scratch on a large corpus of text from the Q&A section of Stackoverflow, making it closer to our text domain than the \"vanilla\" BERT model. However, BERT Overflow is not trained on the job postings portion of Stackoverflow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Both the Bi-LSTM (Plank et al., 2016) and the MaChAmp (van der Goot et al., 2021) toolkit are capable of Multi Task Learning (MTL) (Caruana, 1997) . We therefore, set up a number of experiments testing the impact of three different auxiliary tasks. The auxiliary tasks and their datasets are as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 37, |
|
"text": "(Plank et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 146, |
|
"text": "(Caruana, 1997)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auxiliary tasks", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 I2B2/UTHealth (Stubbs and Uzuner, 2015) -Medical de-identification;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auxiliary tasks", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 CoNLL 2003 (Sang and De Meulder, 2003) -News Named Entity Recognition;", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 40, |
|
"text": "De Meulder, 2003)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auxiliary tasks", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 The combination of the above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auxiliary tasks", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The data of the two tasks are similar to our dataset in two different ways. The I2B2 lies in a different text domain, namely medical notes, however, the label set of the task is close to our label set, as mentioned in Section 3.2. For CoNLL, we have a general corpus of named entities but fewer types (location, organization, person, and miscellaneous), but the text domain is presumably closer to our data. We test the impact of using both auxiliary tasks along with our own dataset. Table 5 : Evaluation of the best performing models on the test set across three runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 492, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Auxiliary tasks", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Here we outline the results of the experiments described in Section 4. All results are mean scores across three different runs. 3 The metrics are calculated using the conlleval script 4 from the original CoNLL-2000 shared task. Table 3 shows the results from training on JOBSTACK only, and Table 4 shows the results of the MTL experiments described in Section 4.3; both report results on the development set. Lastly, Table 5 shows the scores of the selected best models (as found on the development set) when evaluated on the final held-out test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 134, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 244, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 303, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 430, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Is a CRF layer necessary? In Table 3, as expected, adding the CRF to the Bi-LSTM clearly helps, consistently improving precision and thereby F1. For the stronger BERT model the overall improvement is smaller and does not necessarily stem from higher precision. We note that on average across the three seed runs, MaChAmp with BERT base and no CRF mistakenly adds an I-tag following an O-tag 8 times out of 426 gold entities; in contrast, MaChAmp with BERT base and CRF makes no such mistake in any of its three seed runs. Earlier research, such as Souza et al. (2019), shows that BERT models with a CRF layer improve over or perform similarly to their simpler variants in overall F1. Similarly, they note that in most cases the CRF yields higher precision but lower recall, as in our results on the development set. Interestingly, however, precision drops on the test set for the Transformer-based model. As the overall F1 score increases slightly, we use the CRF layer in all subsequent experiments. The main take-away is that both models benefit from an added CRF layer for this task, the Transformer model to a smaller degree.", |
|
"cite_spans": [ |
|
{ |
|
"start": 564, |
|
"end": 583, |
|
"text": "Souza et al. (2019)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 36, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "LSTM versus Transformer Initially, LSTM networks dominated the de-identification field in the medical domain. More recently, large-scale pre-trained language models have become ubiquitous in NLP, although they are rarely used in this field. On both development and test results (Table 3, Table 5), we show that a Transformer-based model outperforms the LSTM-based approaches with both non-contextualized and contextualized representations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 290, |
|
"text": "Table 3, Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Poor performance with BERT Overflow BERT base is the best embedding method among all experiments using Bilty, with BERT Overflow being the worst by a considerable margin. Being able to fine-tune BERT base does give a good increase in performance overall. The same trend is apparent when fine-tuning BERT Overflow, but it is not enough to catch up with BERT base. Overall, MaChAmp with BERT base and CRF is the best model; however, Bilty with BERT base and CRF does have the best precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We hypothesized that the domain-specific BERT Overflow representations would be beneficial for this task. Intuitively, BERT Overflow should help with detecting Profession entities, which contain specific skills related to the IT domain, such as Python developer, Rust developer, and Scrum master. Although the corpus it is trained on does not match our vacancy domain exactly, we expected to see at most a slight performance drop. This is not the case, as the drop in performance turned out to be large. It is not fully clear to us why; it could be that the Q&A data it is trained on consists of more informal dialogue than job postings. In the future, we would like to compare these results to training a BERT model on job postings data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Auxiliary data increases performance Looking at the results of the auxiliary experiments in Table 4, we see that all auxiliary sources are beneficial for both types of models. A closer look reveals that once again MaChAmp with BERT base is the best performer across all three auxiliary tasks. Bilty with BERT base again has good precision, though not the best this time around. For a task like de-identification, recall is preferable, showing that fine-tuning BERT is better than the classic Bi-LSTM-CRF. Moreover, BERT Overflow under-performs compared to BERT base. However, BERT Overflow gains four points in F1 with I2B2 as auxiliary task in MaChAmp. For Bilty with BERT Overflow, we see a slightly greater gain with both CoNLL and I2B2 as auxiliary tasks. When comparing the auxiliary data sources to each other, we note that the closer text domain (CoNLL news) is more beneficial than the closer label set (I2B2) from a more distant medical text source. This is consistent across the strongest models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 101, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In general, it can be challenging to train multi-task networks that outperform or even match their single-task counterparts (Alonso and Plank, 2017; Clark et al., 2019). Ruder (2017) mentions that training on a large number of tasks is known to help regularize multi-task models. A related benefit of MTL is the transfer of learned \"knowledge\" between closely related tasks. In our case, adding auxiliary tasks has been beneficial, improving our performance on both development and test compared to the single-task setting. In particular, it seems to have helped with retaining a high recall score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 147, |
|
"text": "Plank, 2017;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 167, |
|
"text": "Clark et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Performance on the test set Finally, we evaluate the best-performing models on our held-out test set. The best models are selected based on their F1, precision, and recall. The results are shown in Table 5. Comparing them to those in Table 3 and Table 4, it is clear that Bilty with BERT base sees a smaller drop in F1 than MaChAmp with BERT base. We also see an increase in recall for Bilty compared to its performance on the development set. In general, recall for each model stays quite stable without any significant drops. Interestingly, the internal ranking among the MTL MaChAmp with BERT base models has changed, with JOBSTACK + I2B2 being the best-performing model in terms of F1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 219, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 280, |
|
"text": "Table 3 and Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Per-entity Analysis In Table 6, we show a deeper analysis on the test set: the performance of the two different auxiliary tasks, CoNLL and I2B2, in a multi-task learning setting. Table 6: Performance of the two different auxiliary tasks. Reported are F1, precision (P), and recall (R) per entity; the number after each entity name is the number of gold label instances in the test set. We hypothesized different performance gains with each auxiliary task. For I2B2, we expected Contact and Profession to do better than with CoNLL, since I2B2 contains contact-information entities (e.g., phone numbers, emails, et cetera) and professions of patients. Surprisingly, this is not the case for Contact, as CoNLL outperforms I2B2 on all three metrics. We note, however, that this result could be due to the few instances of Contact and Name present in the gold test set. Additionally, both named entities are predicted only six to nine times by both models across the three runs on the test set, which could explain the large difference in performance. For Profession, I2B2 is beneficial for this particular named entity, as expected. For the other three named entities, the performance is similar. As Location, Name, and Organization appear in both datasets, we did not expect any difference in performance; the results confirm this intuition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 148, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this work we introduce JOBSTACK, a dataset for de-identification of English Stackoverflow job postings. Our implementation is publicly available. 5 The dataset is freely available upon request.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 150, |
|
"text": "5", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We present neural baselines based on LSTM and Transformer models. Our experiments show the following: (1) Transformer-based models consistently outperform Bi-LSTM-CRF-based models that have been standard for de-identification in the medical domain (RQ1). (2) Stackoverflow-related BERT representations are not more effective than regular BERT representations on Stackoverflow job postings for de-identification (RQ2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "(3) MTL experiments with BERT representations and related auxiliary data sources improve our de-identification results (RQ3); the auxiliary task trained on the closer text type was the most beneficial, yet results improved with both auxiliary data sources. This shows the benefit of using multi-task learning for de-identification in job vacancy data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/application-regulation/do-data-protection-rules-apply-data-about-company_en", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We sampled three random seeds: 3477689, 4213916, 8749520, which are used for all experiments. 4 https://www.clips.uantwerpen.be/conll2000/chunking/output.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/kris927b/JobStack", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the NLPnorth group for feedback on an earlier version of this paper. We would also like to thank the anonymous reviewers for their comments to improve this paper. Last, we also thank NVIDIA and the ITU High-performance Computing cluster for computing resources. This research is supported by the Independent Research Fund Denmark (DFF) grant 9131-00019B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The MITRE identification scrubber toolkit: design, training, and assessment", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bayer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reyyan", |
|
"middle": [], |
|
"last": "Yeniterzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheryl", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hanauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bradley", |
|
"middle": [], |
|
"last": "Malin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lynette", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "International journal of medical informatics", |
|
"volume": "79", |
|
"issue": "12", |
|
"pages": "849--859", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Aberdeen, Samuel Bayer, Reyyan Yeniterzi, Ben Wellner, Cheryl Clark, David Hanauer, Bradley Malin, and Lynette Hirschman. 2010. The MITRE identification scrubber toolkit: design, training, and assessment. International journal of medical informatics, 79(12):849-859.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Health insurance portability and accountability act of 1996", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Public law", |
|
"volume": "104", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Accountability Act. 1996. Health insurance portability and accountability act of 1996. Public law, 104:191.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "When is multitask learning effective? semantic sequence prediction under varying data conditions", |
|
"authors": [ |
|
{ |
|
"first": "Alonso", |
|
"middle": [], |
|
"last": "H\u00e9ctor Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "44--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H\u00e9ctor Mart\u00ednez Alonso and Barbara Plank. 2017. When is multitask learning effective? Semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44-53.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Named entity recognition in Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Balasuriya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicky", |
|
"middle": [], |
|
"last": "Ringland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James R", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web), pages 10-18.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "SciBERT: A pretrained language model for scientific text", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3606--3611", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606-3611.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Crawling and preprocessing mailing lists at scale for dialog analysis", |
|
"authors": [ |
|
{ |
|
"first": "Janek", |
|
"middle": [], |
|
"last": "Bevendorff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Al", |
|
"middle": [], |
|
"last": "Khalid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Khatib", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benno", |
|
"middle": [], |
|
"last": "Potthast", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1151--1158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janek Bevendorff, Khalid Al Khatib, Martin Potthast, and Benno Stein. 2020. Crawling and preprocessing mailing lists at scale for dialog analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1151-1158.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Carlini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00dalfar", |
|
"middle": [], |
|
"last": "Erlingsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jernej", |
|
"middle": [], |
|
"last": "Kos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawn", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "28th {USENIX} Security Symposium ({USENIX} Security 19)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "267--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Carlini, Chang Liu, \u00dalfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th {USENIX} Security Symposium ({USENIX} Security 19), pages 267-284.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Extracting training data from large language models", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Carlini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Tramer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Jagielski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ariel", |
|
"middle": [], |
|
"last": "Herbert-Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2012.07805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2020. Extracting training data from large language models. arXiv preprint arXiv:2012.07805.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Multitask learning. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "41--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Creating a live, public short message service corpus: the nus sms corpus. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "299--335", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Chen and Min-Yen Kan. 2013. Creating a live, public short message service corpus: the NUS SMS corpus. Language Resources and Evaluation, 47(2):299-335.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "BAM! Born-again multi-task networks for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Urvashi", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.04829" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D Manning, and Quoc V Le. 2019. BAM! Born-again multi-task networks for natural language understanding. arXiv preprint arXiv:1907.04829.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "De-identification of patient notes with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Franck", |
|
"middle": [], |
|
"last": "Dernoncourt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [ |
|
"Young" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "24", |
|
"issue": "3", |
|
"pages": "596--606", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. Journal of the American Medical Informatics Association, 24(3):596-606.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "De-identification of emails: Pseudonymizing privacy-sensitive data in a german email corpus", |
|
"authors": [ |
|
{ |
|
"first": "Elisabeth", |
|
"middle": [], |
|
"last": "Eder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrike", |
|
"middle": [], |
|
"last": "Krieg-Holz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Hahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "259--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elisabeth Eder, Ulrike Krieg-Holz, and Udo Hahn. 2019. De-identification of emails: Pseudonymizing privacy-sensitive data in a German email corpus. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 259-269.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Measuring nominal scale agreement among many raters", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Psychological bulletin", |
|
"volume": "76", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "613--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613-619.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Adversarial learning of privacy-preserving text representations for deidentification of medical records", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Friedrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arne", |
|
"middle": [], |
|
"last": "K\u00f6hn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregor", |
|
"middle": [], |
|
"last": "Wiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5829--5839", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1584" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max Friedrich, Arne K\u00f6hn, Gregor Wiedemann, and Chris Biemann. 2019. Adversarial learning of privacy-preserving text representations for de-identification of medical records. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5829-5839, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Massive Choice, Ample tasks (MaChAmp): A toolkit for multi-task learning in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Ahmet", |

"middle": [], |

"last": "\u00dcst\u00fcn", |

"suffix": "" |

}, |

{ |

"first": "Alan", |

"middle": [], |

"last": "Ramponi", |

"suffix": "" |

}, |

{ |

"first": "Ibrahim", |

"middle": [], |

"last": "Sharaf", |

"suffix": "" |

}, |

{ |

"first": "Barbara", |

"middle": [], |

"last": "Plank", |

"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Software Demonstrations of the 16th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rob van der Goot, Ahmet \u00dcst\u00fcn, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive Choice, Ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. In Proceedings of the Software Demonstrations of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Evaluation of a deidentification (de-id) software engine to share pathology reports and clinical documents for research", |
|
"authors": [ |
|
{ |
|
"first": "Dilip", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melissa", |
|
"middle": [], |
|
"last": "Saul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Gilbertson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "American journal of clinical pathology", |
|
"volume": "121", |
|
"issue": "2", |
|
"pages": "176--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dilip Gupta, Melissa Saul, and John Gilbertson. 2004. Evaluation of a deidentification (de-id) software en- gine to share pathology reports and clinical docu- ments for research. American journal of clinical pathology, 121(2):176-186.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Crfs based de-identification of medical records", |
|
"authors": [ |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Guan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianyi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keting", |
|
"middle": [], |
|
"last": "Cen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenlan", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "58", |
|
"issue": "", |
|
"pages": "39--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bin He, Yi Guan, Jianyi Cheng, Keting Cen, and Wen- lan Hua. 2015. Crfs based de-identification of med- ical records. Journal of biomedical informatics, 58:S39-S46.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "De-identification of medical records using conditional random fields and long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Zhipeng", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Guan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingchi", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "43--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhipeng Jiang, Chao Zhao, Bin He, Yi Guan, and Jingchi Jiang. 2017. De-identification of medical records using conditional random fields and long short-term memory networks. Journal of biomedi- cal informatics, 75:S43-S53.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Deidentification of free-text medical records using pre-trained bidirectional transformers", |
|
"authors": [ |
|
{ |
 |
"first": "Alistair", |
 |
"middle": [ |
 |
"E", |
 |
"W" |
 |
], |
 |
"last": "Johnson", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Lucas", |
 |
"middle": [], |
 |
"last": "Bulgarelli", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Tom J", |
 |
"middle": [], |
 |
"last": "Pollard", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the ACM Conference on Health, Inference, and Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "214--221", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alistair EW Johnson, Lucas Bulgarelli, and Tom J Pol- lard. 2020. Deidentification of free-text medical records using pre-trained bidirectional transformers. In Proceedings of the ACM Conference on Health, Inference, and Learning, pages 214-221.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A deep learning architecture for deidentification of patient notes: Implementation and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Kaung", |
|
"middle": [], |
|
"last": "Khin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Burckhardt", |
|
"suffix": "" |
|
}, |
 |
{ |
 |
"first": "Rema", |
 |
"middle": [], |
 |
"last": "Padman", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.01570" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaung Khin, Philipp Burckhardt, and Rema Pad- man. 2018. A deep learning architecture for de- identification of patient notes: Implementation and evaluation. arXiv preprint arXiv:1810.01570.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "MT summit", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
 |
"first": "John", |
 |
"middle": [ |
 |
"D" |
 |
], |
 |
"last": "Lafferty", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Andrew", |
 |
"middle": [], |
 |
"last": "McCallum", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Fernando", |
 |
"middle": [ |
 |
"CN" |
 |
], |
 |
"last": "Pereira", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth In- ternational Conference on Machine Learning, pages 282-289.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The measurement of observer agreement for categorical data", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Landis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gary G", |
|
"middle": [], |
|
"last": "Koch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "biometrics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J Richard Landis and Gary G Koch. 1977. The mea- surement of observer agreement for categorical data. biometrics, pages 159-174.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Automatic de-identification of electronic medical records using token-level and character-level conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Zengjian", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangxin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Buzhou", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaolong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingcai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haodi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiwen", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suisong", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "58", |
|
"issue": "", |
|
"pages": "47--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zengjian Liu, Yangxin Chen, Buzhou Tang, Xiaolong Wang, Qingcai Chen, Haodi Li, Jingfeng Wang, Qiwen Deng, and Suisong Zhu. 2015. Automatic de-identification of electronic medical records using token-level and character-level conditional random fields. Journal of biomedical informatics, 58:S47- S52.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "De-identification of clinical notes via recurrent neural network and conditional random field", |
|
"authors": [ |
|
{ |
|
"first": "Zengjian", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Buzhou", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaolong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingcai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "34--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. Journal of biomedical informatics, 75:S34-S42.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoann", |
|
"middle": [], |
|
"last": "Dupont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
}, |
 |
{ |
 |
"first": "\u00c9ric", |
 |
"middle": [], |
 |
"last": "de la Clergerie", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Djam\u00e9", |
 |
"middle": [], |
 |
"last": "Seddah", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Beno\u00eet", |
 |
"middle": [], |
 |
"last": "Sagot", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7203--7219", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.645" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Improved de-identification of physician notes through integrative modeling of both public and private medical text. BMC medical informatics and decision making", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Britt", |
|
"middle": [], |
|
"last": "Mcmurry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guergana", |
|
"middle": [], |
|
"last": "Fitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Savova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben Y", |
|
"middle": [], |
|
"last": "Isaac S Kohane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Reis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew J McMurry, Britt Fitch, Guergana Savova, Isaac S Kohane, and Ben Y Reis. 2013. Improved de-identification of physician notes through integra- tive modeling of both public and private medical text. BMC medical informatics and decision mak- ing, 13(1):112.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "De-identification of unstructured clinical data for patient privacy protection", |
|
"authors": [ |
|
{ |
 |
"first": "Stephane", |
 |
"middle": [ |
 |
"M" |
 |
], |
 |
"last": "Meystre", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2015, |
|
"venue": "Medical Data Privacy Handbook", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "697--716", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephane M Meystre. 2015. De-identification of un- structured clinical data for patient privacy protec- tion. In Medical Data Privacy Handbook, pages 697-716. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Automatic de-identification of textual documents in the electronic health record: a review of recent research", |
|
"authors": [ |
|
{ |
 |
"first": "Stephane", |
 |
"middle": [ |
 |
"M" |
 |
], |
 |
"last": "Meystre", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "F", |
 |
"middle": [ |
 |
"Jeffrey" |
 |
], |
 |
"last": "Friedlin", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Brett", |
 |
"middle": [ |
 |
"R" |
 |
], |
 |
"last": "South", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Shuying", |
 |
"middle": [], |
 |
"last": "Shen", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Matthew", |
 |
"middle": [ |
 |
"H" |
 |
], |
 |
"last": "Samore", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2010, |
|
"venue": "BMC medical research methodology", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephane M Meystre, F Jeffrey Friedlin, Brett R South, Shuying Shen, and Matthew H Samore. 2010. Au- tomatic de-identification of textual documents in the electronic health record: a review of recent research. BMC medical research methodology, 10(1):70.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "doccano: Text annotation tool for human", |
|
"authors": [ |
|
{ |
|
"first": "Hiroki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takahiro", |
|
"middle": [], |
|
"last": "Kubo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junya", |
|
"middle": [], |
|
"last": "Kamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasufumi", |
|
"middle": [], |
|
"last": "Taniguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroki Nakayama, Takahiro Kubo, Junya Kamura, Ya- sufumi Taniguchi, and Xu Liang. 2018. doccano: Text annotation tool for human. Software available from https://github.com/doccano/doccano.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Automated de-identification of free-text medical records", |
|
"authors": [ |
|
{ |
 |
"first": "Ishna", |
 |
"middle": [], |
 |
"last": "Neamatullah", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Margaret", |
 |
"middle": [ |
 |
"M" |
 |
], |
 |
"last": "Douglass", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "H", |
 |
"middle": [ |
 |
"Lehman" |
 |
], |
 |
"last": "Li-wei", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Andrew", |
 |
"middle": [], |
 |
"last": "Reisner", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Mauricio", |
 |
"middle": [], |
 |
"last": "Villarroel", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "William", |
 |
"middle": [ |
 |
"J" |
 |
], |
 |
"last": "Long", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Peter", |
 |
"middle": [], |
 |
"last": "Szolovits", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "George", |
 |
"middle": [ |
 |
"B" |
 |
], |
 |
"last": "Moody", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Roger", |
 |
"middle": [ |
 |
"G" |
 |
], |
 |
"last": "Mark", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Gari", |
 |
"middle": [ |
 |
"D" |
 |
], |
 |
"last": "Clifford", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC medical informatics and decision making", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ishna Neamatullah, Margaret M Douglass, H Lehman Li-wei, Andrew Reisner, Mauricio Villarroel, William J Long, Peter Szolovits, George B Moody, Roger G Mark, and Gari D Clifford. 2008. Auto- mated de-identification of free-text medical records. BMC medical informatics and decision making, 8(1):32.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Approaches of anonymisation of an sms corpus", |
|
"authors": [ |
|
{ |
|
"first": "Namrata", |
|
"middle": [], |
|
"last": "Patel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Accorsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Inkpen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C\u00e9dric", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Roche", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Namrata Patel, Pierre Accorsi, Diana Inkpen, C\u00e9dric Lopez, and Mathieu Roche. 2013. Approaches of anonymisation of an sms corpus. In International Conference on Intelligent Text Processing and Com- putational Linguistics, pages 77-88. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "412--418", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2067" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412- 418, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Regulation (eu) 2016/679 of the european parliament and of the council of 27 april 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46", |
|
"authors": [], |
|
"year": 2016, |
|
"venue": "Official Journal of the European Union", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "General Data Protection Regulation. 2016. Regulation (eu) 2016/679 of the european parliament and of the council of 27 april 2016 on the protection of natu- ral persons with regard to the processing of personal data and on the free movement of such data, and re- pealing directive 95/46. Official Journal of the Eu- ropean Union (OJ), 59(1-88):294.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "An overview of multi-task learning in", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "deep neural networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.05098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", |
|
"authors": [ |
|
{ |
 |
"first": "Erik", |
 |
"middle": [], |
 |
"last": "Tjong Kim Sang", |
 |
"suffix": "" |
 |
}, |
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceed- ings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Portuguese named entity recognition using bert-crf", |
|
"authors": [ |
|
{ |
|
"first": "F\u00e1bio", |
|
"middle": [], |
|
"last": "Souza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rodrigo", |
|
"middle": [], |
|
"last": "Nogueira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Lotufo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.10649" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2019. Portuguese named entity recognition using bert-crf. arXiv preprint arXiv:1909.10649.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/uthealth shared task track 1", |
|
"authors": [ |
|
{ |
|
"first": "Amber", |
|
"middle": [], |
|
"last": "Stubbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Kotfila", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uzuner", |
|
"middle": [], |
|
"last": "And\u00f6zlem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "58", |
|
"issue": "", |
|
"pages": "11--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amber Stubbs, Christopher Kotfila, and\u00d6zlem Uzuner. 2015. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/uthealth shared task track 1. Journal of biomedical informatics, 58:S11-S19.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus", |
|
"authors": [ |
|
{ |
|
"first": "Amber", |
|
"middle": [], |
|
"last": "Stubbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "58", |
|
"issue": "", |
|
"pages": "20--29", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.jbi.2015.07.020" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amber Stubbs and\u00d6zlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus. Journal of Biomedical Informatics, 58:S20-S29.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "State-of-the-art anonymization of medical records using an iterative machine learning framework", |
|
"authors": [ |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00f3bert", |
|
"middle": [], |
|
"last": "Busa-Fekete", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "14", |
|
"issue": "5", |
|
"pages": "574--580", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas, and R\u00f3bert Busa- Fekete. 2007. State-of-the-art anonymization of medical records using an iterative machine learning framework. Journal of the American Medical Infor- matics Association, 14(5):574-580.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Code and named entity recognition in stackoverflow", |
|
"authors": [ |
|
{ |
|
"first": "Jeniya", |
|
"middle": [], |
|
"last": "Tabassum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mounica", |
|
"middle": [], |
|
"last": "Maddela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeniya Tabassum, Mounica Maddela, Wei Xu, and Alan Ritter. 2020. Code and named entity recognition in stackoverflow. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Henk van den Heuvel, and Nelleke Oostdijk", |
|
"authors": [ |
|
{ |
|
"first": "Maaske", |
|
"middle": [], |
|
"last": "Treurniet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orphee", |
|
"middle": [], |
|
"last": "De Clercq", |
|
"suffix": "" |
|
}, |
 |
{ |
 |
"first": "Henk", |
 |
"middle": [], |
 |
"last": "van den Heuvel", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Nelleke", |
 |
"middle": [], |
 |
"last": "Oostdijk", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2268--2273", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maaske Treurniet, Orphee De Clercq, Henk van den Heuvel, and Nelleke Oostdijk. 2012. Collection of a corpus of dutch sms. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2268-2273.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Comparing rule-based, feature-based and deep neural methods for de-identification of dutch medical records", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Trienes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Trieschnigg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Seifert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hiemstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Health Search and Data Mining Workshop: Proceedings of the ACM WSDM 2020 Health Search and Data Mining Workshop co-located with the 13th ACM International WSDM Conference (WSDM 2020) Houston", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J Trienes, D Trieschnigg, C Seifert, and D Hiemstra. 2020. Comparing rule-based, feature-based and deep neural methods for de-identification of dutch medical records. In Eickhoff, C.(ed.), Health Search and Data Mining Workshop: Proceedings of the ACM WSDM 2020 Health Search and Data Min- ing Workshop co-located with the 13th ACM Inter- national WSDM Conference (WSDM 2020) Hous- ton, Texas, USA, February 3, 2020, pages 3-11. [Sl]: CEUR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Snippet of a job posting, full job posting can be found in Appendix A." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Statistics of our JOBSTACK dataset." |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "...13. Job description: 14. [XXX Organization ] is a modern multi tenant, microservices based solution and Floor Planning is one major functional solution vertical of the [XXX Organization ] platform. 15. What you'll be doing: 16. As a [XXX Profession ] for [XXX Organization ], you will be one of the founding members of our [XXX Location ] based floor planning development team.17. You will be in charge for development of future floor planning capabilities on the [XXX Organization ] platform and be the software architect for the capability. 18. You will drive the team to improve the coding practices and boost performance. 19. You will also be a member of our [XXX Organization ] and have a major influence on feature roadmap and technologies we use. ..." |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>Model</td><td>Auxiliary tasks</td><td>F1 Score</td><td>Precision</td><td>Recall</td></tr><tr><td/><td>JOBSTACK + CoNLL</td><td>81.90 \u00b1 0.32</td><td>86.91 \u00b1 1.94</td><td>77.49 \u00b1 1.87</td></tr><tr><td>Bilty + BERT base + CRF</td><td>JOBSTACK + I2B2</td><td>79.15 \u00b1 2.19</td><td>83.61 \u00b1 2.61</td><td>75.18 \u00b1 2.59</td></tr><tr><td/><td colspan=\"2\">JOBSTACK + CoNLL + I2B2 81.37 \u00b1 2.01</td><td>84.92 \u00b1 1.67</td><td>78.28 \u00b1 4.34</td></tr><tr><td/><td>JOBSTACK + CoNLL</td><td>58.62 \u00b1 1.46</td><td>79.34 \u00b1 2.34</td><td>46.54 \u00b1 1.99</td></tr><tr><td>Bilty + BERT Overflow + CRF</td><td>JOBSTACK + I2B2</td><td>55.99 \u00b1 1.93</td><td>72.03 \u00b1 6.48</td><td>46.10 \u00b1 2.55</td></tr><tr><td/><td colspan=\"2\">JOBSTACK + CoNLL + I2B2 59.15 \u00b1 2.15</td><td>71.20 \u00b1 4.80</td><td>50.86 \u00b1 3.31</td></tr><tr><td/><td>JOBSTACK + CoNLL</td><td colspan=\"2\">87.20 \u00b1 0.34 87.24 \u00b1 1.94</td><td>87.23 \u00b1 1.24</td></tr><tr><td>MaChAmp + BERT base + CRF</td><td>JOBSTACK + I2B2</td><td>86.64 \u00b1 0.53</td><td colspan=\"2\">88.44 \u00b1 0.84 84.92 \u00b1 0.44</td></tr><tr><td/><td colspan=\"2\">JOBSTACK + CoNLL + I2B2 86.06 \u00b1 0.66</td><td>86.13 \u00b1 0.50</td><td>86.00 \u00b1 0.87</td></tr><tr><td/><td>JOBSTACK + CoNLL</td><td>70.62 \u00b1 0.64</td><td>75.65 \u00b1 1.41</td><td>66.24 \u00b1 0.98</td></tr><tr><td>MaChAmp + BERT Overflow + CRF</td><td>JOBSTACK + I2B2</td><td>73.88 \u00b1 0.16</td><td>80.26 \u00b1 1.32</td><td>68.47 \u00b1 1.03</td></tr><tr><td/><td colspan=\"2\">JOBSTACK + CoNLL + I2B2 73.29 \u00b1 0.22</td><td>77.66 \u00b1 0.82</td><td>69.41 \u00b1 0.89</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Results on the development set across three runs using our JOBSTACK dataset." |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>Model</td><td>Auxiliary tasks</td><td>F1 Score</td><td>Precision</td><td>Recall</td></tr><tr><td>Bilty + BERT base + CRF</td><td>JOBSTACK</td><td>78.99 \u00b1 0.32</td><td colspan=\"2\">82.44 \u00b1 0.95 75.90 \u00b1 1.39</td></tr><tr><td/><td>JOBSTACK</td><td>79.91 \u00b1 0.38</td><td>75.92 \u00b1 0.39</td><td>84.35 \u00b1 0.49</td></tr><tr><td>MaChAmp + BERT base + CRF</td><td>JOBSTACK + CoNLL JOBSTACK + I2B2</td><td>81.27 \u00b1 0.28 82.05</td><td>77.84 \u00b1 1.19</td><td>85.06 \u00b1 0.91</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "\u00b1 0.80 80.30 \u00b1 0.99 83.88 \u00b1 0.67 JOBSTACK + CoNLL + I2B2 81.47 \u00b1 0.43 77.66 \u00b1 0.58 85.68 \u00b1 0.57" |
|
} |
|
} |
|
} |
|
} |